2013 AOCR Hackathon Wiki
Revision as of 18:20, 7 February 2013
Welcome to the 2013 iDigBio AOCR Hackathon Wiki
- Short URL to this hackathon wiki http://tinyurl.com/aocrhackathonwiki
- Those participating in the first iDigBio AOCR Hackathon need an iDigBio account.
- Note: This wiki page is undergoing frequent updates; some participants have wiki edit permissions and will add to, update, and edit these pages before, during, and after the hackathon.
- AOCR Working Group Wiki
- AOCR October 2012 Working Group Meeting Presentations
- iConference 2013 iDigBio AOCR WG Wiki
Day-By-Day Schedule: https://www.dropbox.com/s/p8zjm0ajaj838tw/HackathonIconfAgendaFinal.docx
Group Notes
- Please use AOCRhackNotes to capture your ideas and insights collaboratively at our hackathon.
- We are excited to use this process and have found it extremely useful for collaborating effectively!
Meetings
- 11 Jan 2013 via AdobeConnect, 2 - 3 PM EST
- Notes at Google Doc: http://tinyurl.com/aocrhackmeet1
- 17 Jan 2013 via AdobeConnect, 2 - 3 PM EST
- Notes at Google Doc: http://tinyurl.com/aocrhackmeet1
- 25 Jan 2013 via AdobeConnect, 2 - 3 PM EST
- Notes at Google Doc: http://tinyurl.com/aocrhackmeet1
- 1 Feb 2013 via AdobeConnect, 2 - 3 PM EST
- Notes at Google Doc: http://tinyurl.com/aocrhackmeet1
- 8 Feb 2013 via AdobeConnect, 3 - 4 PM EST
- Notes at Google Doc: http://tinyurl.com/aocrhackmeet1
Links to Logistics, Communication, and Participant Information
- Participant List
- Participant Related Projects
- Call for Participation
- Application Form
- Travel, Food, Lodging, Connectivity Logistics
- 2013 Hackathon Listserv, a mailing list for Hackathon Participants at aocr-hackathon-l@lists.ufl.edu
Remote Participation
- Remote Participation via AdobeConnect
- Join us on Wednesday morning 9 AM - Noon CST to hear our hackathon participants report back to the group and share their progress on parsing algorithms.
- Please sign in 15 - 20 minutes early to learn how Adobe Connect works.
- Help with AdobeConnect
- Want to try your skills at parsing? Share your User Interface ideas? Got mad skills for developing image-analysis and image segmentation algorithms? Know where the great authority files (data dictionaries) are? Do tell, via remote participation or in our online group notes doc.
Overview of the Challenge
- 2013 iDigBio AOCR Hackathon Challenge
- overall description of The Challenge
- The Specific Task: parse OCR output to find values for these 2013 hackathon data elements
- Metrics and Evaluation to be used
- Three Data Sets
- There are three data sets, that is, three different sets of images of museum specimen labels. Participants, working alone or in groups, may work on one or more data sets as they choose. The sets are ranked easy, medium, and hard as an estimate of how difficult it will be to get good parsed data from each set's OCR output.
- Accessing the Data
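As a concrete illustration of the parsing task described above, the sketch below pulls a few fields out of raw OCR text with regular expressions. The field names (collector, date, barcode) and the patterns are illustrative assumptions only; the actual 2013 hackathon data elements and formats are defined on the challenge pages.

```python
import re

# Hypothetical field patterns -- illustrative only; the real hackathon
# data elements are defined on the challenge pages.
PATTERNS = {
    "collector": re.compile(r"(?:Coll(?:ector)?\.?|Leg\.?)\s*[:.]?\s*(.+)", re.I),
    "date": re.compile(r"\b(\d{1,2}\s+[A-Za-z]+\s+\d{4})\b"),
    "barcode": re.compile(r"\b(\d{7,9})\b"),
}

def parse_label(ocr_text):
    """Return a dict of field -> first matched value from raw OCR output."""
    fields = {}
    for name, pattern in PATTERNS.items():
        for line in ocr_text.splitlines():
            m = pattern.search(line)
            if m:
                fields[name] = m.group(1).strip()
                break
    return fields

sample = """FLORA OF FLORIDA
Coll. J. K. Small
12 March 1921
00123456"""
print(parse_label(sample))
```

Real OCR output is noisier than this sample, which is exactly why the hackathon compares approaches (regex, machine learning, NLP) against shared metrics.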
Frequently Asked Questions
Deliverables
- report on progress beyond parsing: accomplishments and work in progress
- summary report on progress, including metrics evaluation
- development of OCR SaaS for use by the entire community
- code from participants (at GitHub)
- all participants' talks posted to the hackathon wiki, including links to their code and comments
- participants' talks and summary reports added to the iDigBio Biblio
- collected conversation / feedback from the whole group (report-back summary)
- Social Media
- photographs
- post to Facebook
- blog post to iDigBio
- white paper on the hackathon process.
Choosing Images and Parsing Decisions
Issues that need work
- Known OCR, ML, NLP Issues and challenges
- Human-in-the-loop: User Interface Wish List
- Development of integrated web services that return OCR output to providers. See the AOCR_SaaS
- Thank you, NESCent, Hilmar Lapp, and the HIP working group for this model.
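The integrated web service idea under "Issues that need work" could be sketched as a minimal HTTP endpoint that accepts an image and returns OCR output to the provider as JSON. The endpoint name, response shape, and placeholder OCR function below are assumptions, not the AOCR_SaaS design; a real service would hand the image bytes to an OCR engine such as Tesseract.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_ocr(image_bytes):
    # Placeholder: a real implementation would pass image_bytes to an
    # OCR engine and return the recognized text.
    return "EXAMPLE OCR TEXT"

class OCRHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Accept an image POSTed to /ocr and return its OCR text as JSON.
        if self.path != "/ocr":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        image_bytes = self.rfile.read(length)
        body = json.dumps({"text": run_ocr(image_bytes)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
```

Starting the server with `HTTPServer(("localhost", 8080), OCRHandler).serve_forever()` would then let a data provider POST a label image to /ocr and receive the OCR output back.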