
Final Session (Summer Term 2023)

During our last session on July 13, 2023, each student group presented the final results of the projects we worked on over the course of the semester. It was a great way of getting a better understanding of the individual folktales besides the ones we were working on ourselves. Together we took a look at the different video editing and coding experiences and talked about our individual difficulties during the process.

Coding

At the beginning of this semester, most of the students, myself included, struggled with the coding part of this class. The most common mistakes and difficulties included:

  • forgetting to close tags
  • changing the geographic coordinates in the header
  • finding quotes in the text so they could be properly encoded
  • finding all words for the notes and glossary
  • not using the <q> tag

At least for me, it was a foreign experience and way beyond my “academic comfort zone”. Nonetheless, it was an experience that greatly benefitted me in the end.

Video Editing

This class wasn’t the first time I used video editing programs, but I had never worked with DaVinci Resolve before. Since I missed both tutorial sessions, I had to figure things out by myself, but thankfully our instructors provided us with a detailed step-by-step guide. Nonetheless, there were a few things that really proved difficult in the beginning:

  • locking the subtitles and setting them at the right place (the timestamps were sometimes confusing)
  • adding a title page without shifting any of the subtitle, audio or video tracks
  • inserting the credits at the end

These were all things that most of the students struggled with, and together we came up with helpful suggestions on how to solve the aforementioned problems, e.g. using additional editing programs or creating the videos in multiple steps to avoid shifting the subtitles. In the end, most of us felt confident that we could use DaVinci Resolve again with considerably less effort.

Results

At the end of our last session we talked about the class in general and gave feedback on our individual experiences and accomplishments. Personally, I am really glad that I had the chance to participate, since I gained a lot of new skills and insight into Konkomba culture.

On the Subtitling of Orature

As I was unable to join today’s session, I’m going to discuss the process of video editing and subtitling, which we began last week, instead of focusing on the topics discussed in class.

The small amount of video editing experience I had going into this project didn’t quite prepare me for just how finicky this ended up being. We were kindly provided with translated and timestamped subtitles for our respective videos, but the editing process was much more complex than just copy-pasting the text and numbers. There are some general rules that good subtitles need to abide by in order to fulfill their purpose. Ultimately, I needed to adjust most subtitles to a certain degree to meet those requirements.

An Arduous Process

The size and letter spacing of the subtitles can easily be adjusted based on intuition alone, but the same cannot be said about the two main problems I encountered:

Firstly, the duration and pacing of the subtitles. One full second is generally considered to be the minimum display time, though this obviously only works for very short subtitles. In some cases, a storyteller may speak rather quickly, forcing the subtitles to change at a very fast rate, which leads to a high number of characters per second (CPS) and becomes hard to read. There was a section in one of the videos where I ultimately had to combine two separate subtitles into one, because even just the small pause between them pushed the CPS beyond 30, which is much too fast for most people. Since subtitles are meant to promote accessibility, this obviously wouldn’t do.

As long as the CPS number isn’t red, the subtitle is more or less reasonably paced.
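
To make the CPS arithmetic concrete, here is a minimal Python sketch. It is not part of the actual editing workflow, and the example lines and timings are invented, but it shows why merging two cramped subtitles across a short pause can bring the rate back under 30 CPS.

```python
# Minimal sketch (not part of our actual workflow; lines and timings are invented)
# of the CPS arithmetic described above, including what merging two cramped
# subtitles across a short pause does to the rate.

def cps(text: str, start_s: float, end_s: float) -> float:
    """Characters per second for a subtitle shown from start_s to end_s (seconds)."""
    duration = max(end_s - start_s, 0.001)          # guard against zero-length subtitles
    return len(text.replace("\n", "")) / duration   # line breaks don't count as characters

def merged_cps(a: tuple, b: tuple) -> float:
    """CPS if two consecutive subtitles (text, start, end) were combined into one."""
    return cps(a[0] + " " + b[0], a[1], b[2])       # merged block spans a's start to b's end

first  = ("Nachiin ran to the river,", 10.0, 10.8)
second = ("calling out to the crocodile.", 11.0, 11.9)

for sub in (first, second):
    rate = cps(*sub)
    print(f"{rate:.1f} CPS", "-> over 30, too fast" if rate > 30 else "-> fine")
print(f"merged: {merged_cps(first, second):.1f} CPS")   # the pause is absorbed, CPS drops below 30
```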

Secondly, dealing with multiple speakers. In one video, there is a second person who interjects into the story with a short comment and later joins the storyteller in song. The problem was that, aside from the song, the two people didn’t really speak simultaneously, so a shared subtitle for both felt strange: it would either begin too early, before the person in question started speaking, or linger too long, which also felt slightly off. As a result, I split the subtitles for the storyteller and the audience member into two separate regions that could appear and disappear independently. I also colored them differently, so that it is easier to tell at a glance who is speaking.

Different speakers can be distinguished using names or titles, colored subtitles, or hyphens.
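
For illustration, here is a hedged sketch of the colour-coding idea outside Resolve. In my case I used two separate subtitle regions in DaVinci Resolve, but the same distinction could be expressed in plain SRT entries; the names, timings and lines below are invented.

```python
# Hedged sketch: colour-coding two speakers when writing plain SRT entries.
# (I actually used separate subtitle regions in DaVinci Resolve instead;
# names, timings and lines here are invented for illustration.)

def srt_entry(index: int, start: str, end: str, text: str, color: str = "") -> str:
    """Build one SRT block; many players honour simple <font color> tags."""
    if color:
        text = f'<font color="{color}">{text}</font>'
    return f"{index}\n{start} --> {end}\n{text}\n"

blocks = [
    srt_entry(1, "00:01:02,000", "00:01:04,500", "Storyteller: Nachiin crept closer.", "#FFFFFF"),
    srt_entry(2, "00:01:04,800", "00:01:06,200", "Audience member: He should run!", "#FFD700"),
]
print("\n".join(blocks))
```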

In conclusion:

Orature like the tales we have been working with in this course is meant to be performed. Making those performances accessible and understandable to people who don’t speak the storyteller’s language is an essential part of the process of demarginalization.

Working on editing and subtitling these videos has given me a new appreciation for anyone who has ever provided high-quality subtitles for any kind of media. It’s time-consuming, but ultimately very important work, not just in the context of our course.

Introduction to video editing and subtitling

Subtitle Example: Bilinyi Chikpaab James narrates “Nachiin Pays for Feasting on Unyii’s Children” / Source: HHU Mediathek

Hello everyone!
In this post I’m going to tell you a little bit about our last “Demarginalising Orature” session. As you may have guessed from the title, we talked about and worked on video editing and especially subtitling. In the past few weeks we have learned about Konkomba folktales, language and culture, and we have worked with some of the folktales by encoding them using TEI. Now the next step is editing videos of Konkomba people narrating the folktales. Ultimately, you will find them in the HHU Mediathek.

Our last session

So, what happened in our seminar? Firstly, our tutor Jana gave a presentation introducing us to a video editing program called DaVinci Resolve (DVR). She also introduced us to some of the basics of subtitling, e.g. the reading speed of a subtitle, which should be no more than 30 characters per second (CPS). The ideal rate is 15-20 CPS but, as Jana pointed out, this is quite difficult to achieve. Furthermore, a subtitle should always start synchronously with the speech (defining a subtitle’s start and stop point is called spotting). If a subtitle comprises two lines, it should be presented in pyramid form, so the upper line should ideally be shorter than the lower one. There are many more rules and conventions regarding subtitling, but naming them all would go beyond the scope of this blog entry.
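
As a small illustration of the pyramid rule (this sketch is my own, not from Jana’s presentation, and the 42-character line limit is an assumption rather than a figure we were given), a two-line break could be chosen roughly like this:

```python
# My own illustration (not from the session) of the pyramid rule: if a subtitle
# needs two lines, break it so the upper line comes out shorter than the lower one.
# The 42-character line limit is an assumption, not a figure we were given.

def pyramid_split(text: str, max_line_len: int = 42) -> str:
    """Return the subtitle unchanged if it fits on one line, otherwise break it
    at the last space at or before the midpoint, giving a shorter upper line."""
    if len(text) <= max_line_len:
        return text
    break_at = text.rfind(" ", 0, len(text) // 2 + 1)
    if break_at == -1:            # no space to break at: leave it as one line
        return text
    return text[:break_at] + "\n" + text[break_at + 1:]

print(pyramid_split("Unyii went down to the riverbank and counted her children one by one."))
```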

DaVinci Resolve and SubtitleEdit

“Edit” page in DaVinci Resolve / Source: https://www.blackmagicdesign.com/products/davinciresolve

Then a fellow student, Lisa, gave a short presentation on DVR and introduced us to another program. According to both Jana and Lisa, DVR can be a bit difficult to work with, especially in the beginning. Luckily, Lisa is familiar with another subtitling tool, which she introduced to us as well: it is called SubtitleEdit. You can find a very useful step-by-step tutorial for DVR and SubtitleEdit in her blog entry. Some “fun facts”: according to their website, DaVinci Resolve is Hollywood’s #1 post solution. Apparently, many films and TV shows are edited in DVR. It was first released in 2004. SubtitleEdit, on the other hand, is a free, open-source subtitle editor.

SubtitleEdit interface / Screenshot by Lisa

Conclusion

To sum it up, in our last session we learned about subtitles and subtitling tools. During our session, DaVinci Resolve made my laptop crash, and there were definitely some initial difficulties. But SubtitleEdit is a bit more beginner-friendly, and in the end we will manage to subtitle all our folktale videos, I am sure! Yet another step towards conserving orality and making Konkomba folktales accessible to a broader audience!

Visual Narratives – Subtitling a Recording of a Konkomba Folktale

When reading folktales that have rarely been written down before, it is important to keep their origin in mind. Konkomba folktales have been passed on orally for a long time and have only recently been written down and translated from Likpakpaln into English. Storytelling in Konkomba culture takes place in a very specific way that is essential to it: a storyteller tells the tales to a group, with which he interacts throughout the process in a theatrical way, and often it is the audience’s participation that makes the story complete. The questions they ask or the things they say prompt the storyteller to tell the story in its entirety. These interactions can be seen in the videos of the storytelling sessions.

Many of the folktales have been written down and recorded by Tasun Tidorchibe, who worked on this project with us and provided the material, such as videos and documents of the stories. This project will help to make the folktales more accessible. The storyteller incorporates a lot of emotion and expressive body language into his session, which gives the viewer an insight into the story’s highs and lows. This performative element of the storytelling helps us understand the people even though we don’t speak the language. Therefore, when the storyteller or the audience laugh or show other emotions in reaction to the narrative, their interpretation becomes more obvious to us. When reading the story in English, there will be some words that cannot be translated, but a glossary and footnotes will hold explanations for Likpakpaln terms. These will be incorporated in our TEI document, which Nadine Hoffmann is working on. Nadine has also summarized the story in her blog.

Because the storyteller in the video of “Nachiin Pays for Feasting on Unyii’s Children” tells the original story, translation and subtitles are a way of following the session and understanding it at the same time. His name is Bilinyi Chikpaab, and the video was recorded on the 18th of March 2022 in Kutol. The story he tells, “Nachiin Pays for Feasting on Unyii’s Children”, includes multiple characters, such as a wolf, a rabbit and a crocodile.

When creating subtitles, it is important that the text is timestamped so it is accurate to the video. Tasun Tidorchibe has timestamped the tale for this video. The subtitles have to appear on the screen in time and stay long enough to be read fully. In addition to that, they also have to be in a style that makes them clearly readable. This can easily be achieved by giving the text a background that sets the words apart from the image behind it. What I did in this case is make that background transparent, so that the video isn’t lost. The font should also be as simple as possible, so that it is not distracting. I chose Open Sans Semi Bold and made the font size 48.

There is a lot to see in the video, and the subtitles shouldn’t compromise the storyteller’s presence. A block of subtitles should not be more than two lines of text. For example, the text at 00:02:55:05 to 00:03:15:05 and at 00:06:30:00 to 00:06:37:00 had to be split up according to that rule. I started by making a list of the text that had to be split, looked at the words and then decided where to split each sentence so it wouldn’t be too disruptive. There were 20 blocks in total that had to be split. The subtitles in the program are numbered in accordance with those on my list and the timestamped sheet. To not get those numbers confused while editing, I decided to put the new subtitle chunks on a separate line at first and then drag them down into the original one once I was done splitting all of them.
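
Purely as an illustration of that splitting step (the real work was done by hand in DaVinci Resolve; the example text is invented and the 25 fps frame rate is an assumption), a block that is too long can be cut at a chosen word boundary and its time span shared between the two halves in proportion to their length:

```python
# Illustration only (the real splitting was done by hand in DaVinci Resolve; the
# example text is invented and the 25 fps frame rate is an assumption): a block that
# is too long is cut at a chosen word boundary, and the original time span is shared
# between the two halves in proportion to their length. Timecodes are hh:mm:ss:ff.

FPS = 25

def tc_to_frames(tc: str) -> int:
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * FPS + f

def frames_to_tc(frames: int) -> str:
    s, f = divmod(frames, FPS)
    return f"{s // 3600:02d}:{(s // 60) % 60:02d}:{s % 60:02d}:{f:02d}"

def split_block(start: str, end: str, first_half: str, second_half: str):
    """Split one subtitle block into two, dividing the time span proportionally."""
    start_f, end_f = tc_to_frames(start), tc_to_frames(end)
    share = len(first_half) / (len(first_half) + len(second_half))
    cut = start_f + round((end_f - start_f) * share)
    return (start, frames_to_tc(cut), first_half), (frames_to_tc(cut), end, second_half)

a, b = split_block("00:02:55:05", "00:03:15:05",
                   "Nachiin had eaten every one of Unyii's children",
                   "and now he had to answer for it before them all.")
print(a)
print(b)
```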

After I had created all the subtitles and decided on the style, it was time to check the CPS (characters per second) of each text block. Sometimes an accurate timestamp means that the CPS will be too high (it should not exceed 30), so I had to adjust some of them. Doing that, I realized that it made more sense to keep the start of a text block accurate to the timestamp and push the end point back as far as possible, so that there would be more time to read it. I used DaVinci Resolve to create the subtitles, because it gives the most accurate timestamp format, hh:mm:ss:ff (hours, minutes, seconds and frames). This format allowed me to work out the right length for a subtitle while making only minimal adjustments and not pushing the blocks out of place. I kept watching the video repeatedly to see if everything was in place and easy to see and read. Some of the blocks still had more than 30 CPS, but I made sure that they were still readable.
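
As a rough sketch of that adjustment (again with invented example text and an assumed frame rate of 25 fps), the idea is to keep the spotted start point and push only the end point back until the block drops to 30 CPS or below:

```python
import math

# Rough sketch of the adjustment described above (invented example, 25 fps assumed):
# keep the spotted start point and push only the end point back until the block
# drops to 30 CPS or below.

FPS, MAX_CPS = 25, 30

def to_frames(tc: str) -> int:                      # "hh:mm:ss:ff" -> frame count
    h, m, s, f = map(int, tc.split(":"))
    return ((h * 60 + m) * 60 + s) * FPS + f

def to_tc(frames: int) -> str:                      # frame count -> "hh:mm:ss:ff"
    s, f = divmod(frames, FPS)
    return f"{s // 3600:02d}:{(s // 60) % 60:02d}:{s % 60:02d}:{f:02d}"

def relaxed_end(start: str, end: str, text: str) -> str:
    """Original end timecode, or a later one if the subtitle would exceed MAX_CPS."""
    needed = math.ceil(len(text) / MAX_CPS * FPS)   # minimum duration in frames at 30 CPS
    return to_tc(max(to_frames(end), to_frames(start) + needed))

print(relaxed_end("00:06:30:00", "00:06:31:00", "He dragged the crocodile out of the water."))
# -> 00:06:31:10  (the end point moves back by ten frames so the block stays readable)
```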

Having completed all these steps, it is time to watch the video one more time and make sure everything is in place. Watching it now, the story comes to life, and English readers can understand the emotions around the storyline perfectly.