Hi all,

this might be of interest to some:

 

COMPUTING IN THE HUMANITIES

Sofia, Bulgaria, 8-9 April 2015

Abstract

The workshop Computing in the Humanities will contribute to the development of a methodological layer that allows humanities researchers to develop, refine and share research methods, and to create and make best use of digital methods and collections. The workshop is intended for early-stage researchers working in the humanities.

Further details

The NeDiMAH Network aims to promote the application of advanced Information and Communication Technologies in the humanities across Europe. The Network brings together members of thematic Working Groups that examine the use of computationally based methods for the capture, investigation, analysis, study, modelling, presentation, dissemination, publication and evaluation of humanities materials for research.
The workshop Computing in the Humanities will contribute to the development of a methodological layer that allows humanities researchers to develop, refine and share research methods, and to create and make best use of digital methods and collections.

The main objectives of the workshop are:

• To encourage interdisciplinary work through collaborative research;

• To focus on priority research areas;

• To create links between research communities in the Humanities;

• To provide dissemination of research.

In particular, the workshop Computing in the Humanities aims to answer the following questions:

• How much of the information in an original source, be it a manuscript, charter or early printed book, should be included in a transcription or edition?

• Is the distinction between the 'substantives', the actual words of the text, and the 'accidentals', features such as spelling, punctuation, page layout etc., a useful one?

• Are the 'accidentals' really of no interest or value?

Traditionally, editors have had to decide at the outset what to include and what to ignore. With TEI-conformant XML encoding one can postpone this decision, as it were, recording as much information as possible and then leaving it to the user to choose how much of it he or she wishes to see. In addition to covering the fundamentals of transcription and description using TEI, the workshop will expose participants to methods by which the encoded text may be presented and/or published electronically.
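As a rough illustration of that principle (not material from the workshop itself), the Python sketch below parses a small TEI-style fragment in which <choice> pairs an original reading (<orig>) with a regularized one (<reg>), and renders either layer on demand; the element names are standard TEI, while the sample text and helper function are hypothetical.

```python
# Minimal sketch: render a TEI-style <choice>/<orig>/<reg> fragment in two views.
# The fragment and the render() helper are illustrative, not workshop materials.
import xml.etree.ElementTree as ET

SAMPLE = """<p>
  <choice><orig>vppon</orig><reg>upon</reg></choice> the
  <choice><orig>olde</orig><reg>old</reg></choice> bridge
</p>"""

def render(elem, layer="reg"):
    """Flatten an element, keeping either <orig> or <reg> inside each <choice>."""
    parts = [elem.text or ""]
    for child in elem:
        if child.tag == "choice":
            picked = child.find(layer)
            if picked is not None:
                parts.append(picked.text or "")
        else:
            parts.append(render(child, layer))
        parts.append(child.tail or "")
    return "".join(parts)

root = ET.fromstring(SAMPLE)
print(" ".join(render(root, "orig").split()))  # diplomatic view:  vppon the olde bridge
print(" ".join(render(root, "reg").split()))   # regularized view: upon the old bridge
```

The same encoded file thus serves both a diplomatic and a regularized reading, which is exactly the postponed editorial decision described above.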

Extracting quantifiable data from text is considerably more difficult still. Some issues to consider here are:

• How can we bridge the gap between the depth of hermeneutics and data analysis? How can we systematize text interpretation?

• How do visualizations change our perception of coded data?

• How can we translate hypotheses into data visualizations and which new questions can network visualizations raise?

The workshop will address the above questions and provide hands-on experience with the extraction of network data from texts through the use of methods developed in qualitative data analysis. Participants will work with texts and extract data using an existing coding scheme. The workshop will then provide participants with the technical skills to use entry-level software tools to visualize and explore social networks.
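Purely as an illustrative sketch (the workshop does not prescribe any particular library, and the data below is invented), coded relations of this kind might be turned into a small social network with the networkx package and summarised with a basic centrality measure before being exported for visual exploration:

```python
# Hypothetical sketch: build a small social network from manually coded relations
# (source, target, relation type) and compute a simple centrality measure.
# Assumes the third-party package networkx is installed (pip install networkx).
import networkx as nx

# Relations of the kind one might extract from a text with a coding scheme;
# the names and relation labels are invented for illustration.
coded_relations = [
    ("Anna", "Boris", "correspondence"),
    ("Boris", "Clara", "correspondence"),
    ("Anna", "Clara", "patronage"),
    ("Clara", "Dimitar", "kinship"),
]

G = nx.Graph()
for source, target, relation in coded_relations:
    G.add_edge(source, target, relation=relation)

# Degree centrality: who is connected to the largest share of other actors?
for name, score in sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")

# The graph can then be exported for visual exploration, e.g. as GEXF for Gephi.
nx.write_gexf(G, "network.gexf")
```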

The workshop is intended for early-stage researchers working in the humanities.

Programme

Wednesday, 8th April 2015

09.00-09.15 Registration
09.15-09.30 Welcome, Svetla Koeva (Institute for Bulgarian Language, Sofia, Bulgaria)
09.30-10.30 Lecture 1 Text Technologies for Humanities Research, Marko TADIĆ (University of Zagreb, Croatia)
10.30-11.00 Coffee / Tea Break
11.00-12.00 Lecture 2 Text Technologies for Humanities Research, Marko TADIĆ (University of Zagreb, Croatia)
12.00-13.00 Discussion
13.00-14.00 Lunch
14.00-15.30 Lecture 3 Data Extraction and Visualization of Historical Sources, Marten DÜRING (Digital Humanities Lab, Luxembourg)
15.30-16.00 Coffee / Tea Break
16.00-17.30 Lecture 4 Data Extraction and Visualization of Historical Sources, Marten DÜRING (Digital Humanities Lab, Luxembourg)
19.00 Dinner

Thursday, 9th April 2015

09.00-10.30 Lecture 1 Transcribing and Describing Primary Sources in TEI XML, Matthew James DRISCOLL (University of Copenhagen, Denmark)
10.30-11.00 Coffee / Tea Break
11.00-12.00 Lecture 2 Transcribing and Describing Primary Sources in TEI XML, Matthew James DRISCOLL (University of Copenhagen, Denmark)
12.00-13.00 Discussion
13.00-14.00 Lunch
14.00-15.30 Lecture 3 Data Extraction and Visualization of Historical Sources, Marten DÜRING (Digital Humanities Lab, Luxembourg)
15.30-16.00 Coffee / Tea Break
16.00-17.00 Discussion
17.00 End of workshop

Enquiries and registrations should be directed to Professor Svetla Koeva (svetla@dcla.bas.bg).