I was pretty excited when I saw that one of the sessions at the recent ITI Conference was to look at how corpora can be used as a resource for translators (it’s true – I don’t get out very much). Corpus analysis has had a special place in my heart ever since I did a small project on MonoConc in 1997 as part of my first language degree, and I was looking forward to seeing how things had changed since then.
I must admit to being slightly put off at first by the session write-up in the ITI programme, which claimed that corpora were “a new resource for translators”. Now, I don’t consider myself to be anything more than moderately technically aware, and even setting my undergraduate experience aside, I knew that corpora had been freely available for use in the field of translation for a long time… Thankfully, the speakers quickly redeemed themselves with their experience and obvious enthusiasm for the tools they were speaking about.
Overall, I felt that not much has changed since my days as a MonoConc student. But small office and home PCs are obviously more powerful, which is probably why corpora are seeing a bit of a revival in the field of translation tools. Basically, any translator who has used Google to research a term, concept or subject area is already familiar with the ways in which a corpus can be useful. Dr Serge Sharoff and Dr Jeremy Munday demonstrated how corpus-based tools can offer the translator a more targeted take on the Google approach by enabling us to search within a carefully defined collection of texts. I think they may have intended this to be a more interactive session than it actually was, but given the unexpectedly large conference attendance and the fact that it is very difficult to “explain” software, I think they did admirably well. I’d like to have heard them speak a bit more about how this could apply to more experienced translators, however. I thought Serge was quite an amusing speaker, and it was good to put a face to Jeremy Munday’s name – anyone who has studied translation in the past 15 years or so is probably well aware of him through his books on translation theory.
Dr Ana Julia Perrotti-Garcia suggested that translators could ensure even more reliable results by building their own customised corpora, and then analysing them using any of a number of free tools. She also outlined the steps involved in creating a customised corpus. There were some practical tips in this session, but again, it would have been good to hear her speak more about how she used her customised corpora to develop her skills in her area of specialisation, rather than just her English (second) language skills.
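To give a flavour of what “analysing a customised corpus” actually involves under the hood, here is a minimal keyword-in-context (KWIC) concordancer in Python. This is purely my own illustration, not anything the speakers demonstrated – the folder layout, function names and tokenisation are all my assumptions, and tools like AntConc do far more than this sketch.

```python
import glob
import re

def build_corpus(folder):
    """Read every .txt file in a folder into one flat list of word tokens.
    (Assumes a simple one-folder-of-plain-text corpus layout.)"""
    tokens = []
    for path in sorted(glob.glob(f"{folder}/*.txt")):
        with open(path, encoding="utf-8") as f:
            # Crude tokenisation: lowercase word characters only.
            tokens.extend(re.findall(r"\w+", f.read().lower()))
    return tokens

def concordance(tokens, term, width=5):
    """Return a keyword-in-context line for every occurrence of `term`,
    showing up to `width` words of context on each side."""
    term = term.lower()
    lines = []
    for i, tok in enumerate(tokens):
        if tok == term:
            left = " ".join(tokens[max(0, i - width):i])
            right = " ".join(tokens[i + 1:i + 1 + width])
            lines.append(f"{left} [{tok}] {right}")
    return lines
```

Pointing `build_corpus` at a folder of texts from your specialisation and then running `concordance(tokens, "liability")` (say) is, in miniature, the targeted alternative to a broad Google search that the session was describing.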
Overall, the message was clear: analytical tools such as MonoConc, WordSmith or AntConc are of particular interest to trainee translators and those who translate into a non-native language (yes, I know that’s against the ITI’s Code of Conduct, but it’s a reality for many translators due to the country in which they live). However, these tools also offer the more experienced translator a great way of further developing and improving their translation skills, and I’d love to see someone offer a session in this area in the future.
For anyone interested in finding out more about corpora, I’d recommend starting with this excellent site, which also contains an up-to-date list of free and low-cost tools. There will also be a more detailed write-up of this session in the next ITI Bulletin.