W3C - Timed Text Working Group
https://www.w3.org/groups/wg/timed-text/

The mission of the Timed Text Working Group is to develop W3C Recommendations for online media captioning by developing and maintaining new versions of the Timed Text Markup Language (TTML) and WebVTT (Web Video Text Tracks) based on implementation experience and interoperability feedback, and by creating semantic mappings between those languages.

First Public Working Draft: IMSC Text Profile 1.3
Tue, 22 Jul 2025 06:39:00 +0000
https://www.w3.org/news/2025/first-public-working-draft-imsc-text-profile-1-3/

The Timed Text Working Group has published a First Public Working Draft of IMSC Text Profile 1.3. This specification defines a text-only profile of [ttml2] intended for subtitle and caption delivery applications worldwide.

It improves on the Text Profile specified in [ttml-imsc1.2], with the improvements summarized in L. Summary of substantive changes.
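To give a concrete flavour of the kind of document such a profile constrains, the sketch below builds a minimal TTML2 subtitle document in Python. It is a rough illustration only: the namespace URI is the standard TTML namespace, but the document shown makes no claim to IMSC Text Profile conformance, and the cue contents are invented.

```python
import xml.etree.ElementTree as ET

TTML_NS = "http://www.w3.org/ns/ttml"
XML_NS = "http://www.w3.org/XML/1998/namespace"
ET.register_namespace("", TTML_NS)  # serialize TTML as the default namespace

def make_caption_doc(cues, lang="en"):
    """Build a minimal TTML2 document: one <p> per (begin, end, text) cue."""
    tt = ET.Element(f"{{{TTML_NS}}}tt", {f"{{{XML_NS}}}lang": lang})
    body = ET.SubElement(tt, f"{{{TTML_NS}}}body")
    div = ET.SubElement(body, f"{{{TTML_NS}}}div")
    for begin, end, text in cues:
        p = ET.SubElement(div, f"{{{TTML_NS}}}p", {"begin": begin, "end": end})
        p.text = text
    return ET.tostring(tt, encoding="unicode")

doc = make_caption_doc([
    ("00:00:01.000", "00:00:03.000", "Hello, world."),
    ("00:00:03.500", "00:00:05.000", "Captions are timed text."),
])
print(doc)
```

A text-only profile such as IMSC Text Profile constrains documents of this shape further, for example by restricting the permitted features and styling.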

Group Note: DAPT Requirements
Thu, 08 May 2025 04:36:00 +0000
https://www.w3.org/news/2025/group-note-dapt-requirements/

The Timed Text Working Group has published the DAPT Requirements as a Group Note. The document captures technical requirements for a profile of TTML2 for use in workflows related to dubbing and audio description of movies and videos, known as the Dubbing and Audio description Profile of TTML2 (DAPT).

The DAPT Requirements were previously published as a Draft Note in May 2022 and updated in October 2022.

W3C invites implementations of Dubbing and Audio description Profiles of TTML2
Tue, 11 Mar 2025 09:25:00 +0000
https://www.w3.org/news/2025/w3c-invites-implementations-of-dubbing-and-audio-description-profiles-of-ttml2/

The Timed Text Working Group has published Dubbing and Audio description Profiles of TTML2 as a W3C Candidate Recommendation. This document defines DAPT, a TTML-based file format for the exchange of timed text content in dubbing and audio description workflows. 

Comments are welcome via GitHub issues by 8 April 2025.

IMSC Hypothetical Render Model is a W3C Recommendation
Thu, 25 Apr 2024 03:30:00 +0000
https://www.w3.org/news/2024/imsc-hypothetical-render-model-is-a-w3c-recommendation/

The Timed Text Working Group published IMSC Hypothetical Render Model as a W3C Recommendation. This specification defines a Hypothetical Render Model (HRM) that constrains the presentation complexity of documents that conform to the Text Profiles specified in any edition of Internet Media Subtitles and Captions ([IMSC]).

The objective of the HRM is to allow subtitle and caption authors and providers to verify that the content they provide does not exceed defined complexity levels, so that playback systems can render the content synchronized with the author-specified display times.

The model is not intended as a specification of the processing requirements for implementations. For instance, while the model defines a glyph cache for the purpose of modelling how the number of glyph drawing operations can be reduced, it neither requires the implementation of such a cache, nor models the sub-pixel glyph positioning and anti-aliased glyph rendering that can be used to produce text output.

Furthermore, the model is not intended to constrain readability complexity.
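The kind of authoring-side check the HRM enables can be sketched in a toy form. The function below flags cues whose glyph-drawing rate exceeds a budget; note that this is not the HRM's actual computation (the real model accounts for a glyph cache, region painting, and precisely defined timing), and the threshold is an invented number for illustration.

```python
def flag_busy_cues(cues, max_glyphs_per_second=60.0):
    """Toy complexity screen in the spirit of the HRM: flag cues whose
    glyph-drawing rate (characters per second of display time) exceeds a
    budget. The real HRM is far more precise, e.g. it discounts glyphs
    already held in its modelled glyph cache; this sketch ignores that."""
    flagged = []
    for begin, end, text in cues:  # times in seconds
        duration = end - begin
        rate = len(text) / duration if duration > 0 else float("inf")
        if rate > max_glyphs_per_second:
            flagged.append((begin, end, round(rate, 1)))
    return flagged

# A 0.5 s cue carrying 40 characters renders at 80 glyphs/s and is flagged.
print(flag_busy_cues([(0.0, 2.0, "Short line."), (2.0, 2.5, "x" * 40)]))
# prints [(2.0, 2.5, 80.0)]
```

A provider could run a check of this general shape over a subtitle file before delivery, which is exactly the verification workflow the HRM is designed to standardize.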

IMSC Hypothetical Render Model is a W3C Proposed Recommendation
Thu, 29 Feb 2024 03:00:00 +0000
https://www.w3.org/news/2024/imsc-hypothetical-render-model-is-a-w3c-proposed-recommendation/

Today the Timed Text Working Group published IMSC Hypothetical Render Model as a W3C Proposed Recommendation. This specification defines a Hypothetical Render Model (HRM) that constrains the presentation complexity of documents that conform to the Text Profiles specified in any edition of Internet Media Subtitles and Captions ([IMSC]).

The model is not intended as a specification of the processing requirements for implementations. For instance, while the model defines a glyph cache for the purpose of modelling how the number of glyph drawing operations can be reduced, it neither requires the implementation of such a cache, nor models the sub-pixel glyph positioning and anti-aliased glyph rendering that can be used to produce text output. Furthermore, the model is not intended to constrain readability complexity.

W3C Invites Implementations of IMSC Hypothetical Render Model
Thu, 22 Jun 2023 00:00:00 +0000
https://www.w3.org/news/2023/w3c-invites-implementations-of-imsc-hypothetical-render-model/

The Timed Text Working Group invites implementations of the IMSC Hypothetical Render Model Candidate Recommendation Snapshot. This specification defines a Hypothetical Render Model (HRM) that constrains the presentation complexity of documents that conform to any of the TTML Profiles for Internet Media Subtitles and Captions ([IMSC]).

The model is not intended as a specification of the processing requirements for implementations. For instance, while the model defines a glyph cache for the purpose of modelling how the number of glyph drawing operations can be reduced, it neither requires the implementation of such a cache, nor models the sub-pixel glyph positioning and anti-aliased glyph rendering that can be used to produce text output. Furthermore, the model is not intended to constrain readability complexity.

Comments are welcome via GitHub issues by 20 July 2023.

First Public Working Draft: Dubbing and Audio description Profiles of TTML2
Tue, 25 Apr 2023 16:54:00 +0000
https://www.w3.org/news/2023/first-public-working-draft-dubbing-and-audio-description-profiles-of-ttml2/

The Timed Text Working Group has published a First Public Working Draft of Dubbing and Audio description Profiles of TTML2. This specification defines DAPT, a TTML-based file format for the exchange of timed text content in dubbing and audio description workflows as typically applied when localizing or making accessible versions of videos.

Those workflows begin with transcription and allow for translation and fine-tuning (known as adaptation), resulting in scripts that can be used to generate audio renderings, either by recording a human voice or by using text-to-speech.

Those audio recordings can be associated with the scripts directly, as can mixing instructions, so that an alternative audio track incorporating the recordings can be generated, either prior to distribution or directly in the player.

If the scripts are distributed to players, they can be used to provide an alternative, non-audio representation, for example on a Braille display.

DAPT is intended to meet the requirements defined in DAPT-REQS and builds on work done initially in the Audio Description Community Group.

Towards a Dubbing and Audio Description exchange format
Thu, 12 May 2022 17:29:00 +0000
Nigel Megitt
https://www.w3.org/blog/2022/towards-a-dubbing-and-audio-description-exchange-format/

W3C has begun work on an open standard exchange format for audio description and dubbing scripts and wants interested people to review the draft requirements first published on 2022-05-10.

This post, by one of the TTWG Chairs, is about this work, why it is important, and why now is a good time to be doing it.

As a Chair not only of the W3C's Audio Description Community Group (open to all) and the Timed Text Working Group, but also of the EBU's Timed Text group, I've been privileged to see a growing interest in both audio description and dubbing. As well as the more established vendors, there is a growing cottage industry of small, you might almost say hand-made, web-based authoring tools, each of which seems to use its own bespoke proprietary format for saving and loading work.

From a client perspective, this means that these tools do not interoperate with each other, and it can be hard to move from one to another. This is a classic case where an open standard exchange format would solve real needs. From conversations I have had, I believe implementers would welcome an open standard format.

From a user perspective, anything that makes it more likely to get an accessible experience, especially for users who are watching videos without necessarily seeing the images, must be a good thing. Audio Description and Dubbing are both important in this area.

Audio Description explains directly what is happening in the video image, in case the content's audio does not describe it adequately.

Dubbing is an alternative to translation subtitles: traditionally it has seemed that some countries culturally prefer one or the other, but perhaps we can make it easier for content providers to offer both and allow the user to choose.

Finally, if we can provide the script data as text content to the player, this opens up alternative renderings that are neither visible nor audible, for example using Braille displays.

The W3C Timed Text Working Group has agreed to work on creating an open standard exchange format that supports both dubbing and audio description, and has just published a first public draft Note describing the requirements that such a format needs to support.

The DAPT Requirements Note, first published earlier this week on 2022-05-10, will be used to define the Recommendation track specification, which will be a profile of TTML2.

We have published this as a draft Note because getting the requirements right at the beginning is really important, and we want everyone who is interested to review it and tell us how the requirements can be improved.

To derive the requirements, we first considered the production workflow, then the needs of each step in that workflow, and finally broke those needs down into a granular set of requirements against which we can check the resulting specification.

Please do review the requirements document and send us your feedback - the header material at the beginning of the document explains how to get in touch.
