The project has reached its final stage: all tests have been run, the results examined and analysed in detail, and conclusions drawn with great caution. Our research group is now devoting more time to building a site that combines scholarly content with a user-friendly interface. Every member is contributing their knowledge to improve the project: the structure of the site is being rebuilt so that the main research problem is foregrounded; the contents are being reorganised and hyperlinked so that each page carries the appropriate information with enough emphasis but without excessive repetition; citations and references are being added; multimedia such as graphs, photos and screenshots are being embedded in the relevant pages; and suitable plug-ins are being carefully chosen and installed so that the site remains professional while also showing some dynamism.
This is the right moment for me to look back and consider what we have learned from conducting the Dynamiter Project. The most obvious lesson concerns collaborative work. We once discussed the major differences between digital humanities (DH) and conventional humanities scholarship, and almost everyone pointed to the communal nature of DH as one of them. Kirschenbaum suggests that “digital humanities is a social undertaking” 1 that turns time-consuming solitary research into a cooperative project in which each member can share thoughts, argue, coordinate, and collaborate. As novices in authorship attribution, we benefit from collaboration by gaining different kinds of knowledge efficiently from one another and sharing thoughts on particular topics; it also eases the implementation of the research project by assigning different tasks to different members according to our interests and academic backgrounds. The Analysing The Dynamiter page shows this clearly: although the tests were generally run by the members interested in technical problems, the changes of parameters and corpora, as well as the hypotheses, drew on the intelligence of the members who concentrated on Louis, Fanny and their works.
Another consideration is what DH actually brings to humanities scholarship. A confusion similar to Scheinfeldt’s came to me when I first entered DH: can DH discover new things, rather than only confirm existing arguments? 2 My colleagues once argued that as long as there is a new way of looking at old materials, there is progress. Having now been through the Dynamiter Project, I would say it is more than that. Investigating a literary work from a statistical perspective not only provides a new toolkit for verifying or falsifying an old hypothesis, but also generates new problems in the process. The main research problem of our project is the collaboration of Louis and Fanny on The Dynamiter. But given that the results were inconsistent across runs of tests with different methods, and given the odd clustering of The Half White with Louis’s writings (for details see the Who Wrote The Dynamiter page), we came to question the authorship of both works and tried to offer possible explanations, conducting more trial and error with the software and reading more material about the authors’ lives and writing conditions; meanwhile we also questioned the methods and algorithms we used, hoping to find one that could solve our problem most successfully.
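To make the kind of clustering mentioned above concrete, here is a minimal sketch of one common stylometric approach: each text is reduced to a vector of relative frequencies of the most frequent words, and hierarchical clustering groups texts with similar profiles. This is an illustration only, not our actual pipeline; the four toy “texts” and their labels are invented, and real analyses would use full-length works, larger word lists, and distance measures such as Burrows’s Delta.

```python
# Illustrative stylometric clustering: texts are represented by relative
# frequencies of the most frequent words, then clustered hierarchically.
# The toy texts below are invented for demonstration purposes.
from collections import Counter
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

texts = {
    "author_A_1": "the cat sat on the mat and the cat slept",
    "author_A_2": "the dog sat on the mat and the dog slept",
    "author_B_1": "a storm rose over a dark sea while a ship sank",
    "author_B_2": "a wave rose over a grey sea while a boat sank",
}

# Shared vocabulary: the most frequent words across the whole corpus.
all_counts = Counter(w for t in texts.values() for w in t.split())
vocab = [w for w, _ in all_counts.most_common(10)]

def profile(text):
    """Vector of relative frequencies of the vocabulary words."""
    words = text.split()
    counts = Counter(words)
    return [counts[w] / len(words) for w in vocab]

labels = list(texts)
vectors = [profile(texts[k]) for k in labels]

# Hierarchical clustering on pairwise distances; Ward linkage is one
# common choice, though stylometric tools often use Delta or cosine.
tree = linkage(pdist(vectors), method="ward")
clusters = fcluster(tree, t=2, criterion="maxclust")
print(dict(zip(labels, clusters)))
```

With these toy inputs the two “author A” texts fall into one cluster and the two “author B” texts into another, because their frequent-word profiles differ sharply. An unexpected grouping in such a dendrogram is exactly the kind of signal that, in our case, raised new questions about The Half White.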
By now I no longer worry much about the dehumanisation that the use of statistical techniques might bring, since digital media are essentially a means rather than an end of humanistic scholarship. The results of data computing do not explicitly provide us with ideological explanations; interpretations of both the original texts and the results remain indispensable. However, we should still be careful about what our biases, expectations and prior knowledge can bring to our analyses: they can inspire our interpretations, but they can also create illusions. We should constantly ask ourselves, especially when the results are consistent with our assumptions: did we conduct the experiment rationally? Is there any possibility that we chose a method that would lead to the expected results? Did we interpret our results as objectively as possible? And how can we avoid misinterpretation as far as possible, given that presumptions cannot be eschewed?