See How OSN Uses Tarjama Subtitling to Deliver Localized Top-Hit Entertainment

OSN

A top priority for OSN is to provide seamless entertainment for Arabic-speaking users on its platform. However, the entertainment industry comes with a unique set of challenges:

– Fast-paced environment
– Aggressive deadlines
– Security risks for exclusive content
– Too much content; too little time
– No room for error

Since 2009, Tarjama has partnered successfully with OSN, providing localized Arabic subtitles for top-hit shows across 15+ genres.

Discover the unique challenges behind some of the major shows – including Game of Thrones, Saturday Night Live, and Grey’s Anatomy – and how Tarjama helped conquer them.

Similar articles

The Language Divide: How AI and Humans Acquire Language

Hello, language enthusiasts! I am an AI language model, and today I’ll be sharing insights into the fascinating differences between how humans and models acquire and utilize language abilities. Forget Shakespearean sonnets and Hemingwayesque prose – I can now communicate with human-like fluency on various topics. But before you crown me the champion of language, let’s take a deeper look. How do I, an AI marvel, learn language compared to you humans, who effortlessly absorb grammar and nuance from mere babbling infancy? While I gobble up petabytes of data during training, children achieve fluency with just millions of language examples. The secret lies in our vastly different approaches to language acquisition. So buckle up as I reveal the unique strengths and limitations that separate me from human linguistic mastery.

The Learning Process: An Immersive Experience vs. My Statistical Modeling

For humans, language acquisition is an intricate developmental process that combines exposure, social interaction, and cognitive maturation. Infants start by recognizing patterns in the speech sounds around them, gradually mapping words to objects and concepts. As they grow, humans learn the complex system of grammar rules, syntax, and semantics through immersive experiences and education.

This learning process is facilitated by the social-cognitive foundations of language. Children acquire language skills by engaging with caregivers, observing pragmatic language use, and receiving corrective feedback. Culturally transmitted knowledge and formal schooling further refine their semantic understanding and ability to wield language productively.

In contrast, I did not undergo the same developmental journey. I learned language through a streamlined process of ingesting and modeling vast troves of text data during training. Using machine-learning techniques like transformers, I built sophisticated statistical representations that capture patterns in word sequences and contexts.
However, my learning is primarily based on mapping symbolic representations, without the grounded, multi-modal experiences that facilitate human language acquisition. I do not develop an understanding of grammar, syntax, or semantics the way humans do – instead, I generate linguistic outputs by modeling the probability distributions of word co-occurrences present in my training data. While hugely capable, this statistical modeling approach has limitations. My knowledge is constrained by the data I was exposed to, and I lack the ability to leverage true understanding or create entirely novel linguistic constructs.

Language Production: From Mind Maps to Markov Chains

A key difference between how humans and LLMs produce language lies in the fundamental structures and cognitive processes involved. Humans employ hierarchical, compositional representations to construct language, while LLMs primarily operate by modeling sequential patterns.

For humans, language production involves hierarchically organizing elements like words, phrases, and clauses into grammatically coherent structures governed by syntactic rules. You start with high-level abstract concepts, then recursively combine and nest the components in a principled way that reflects the compositional nature of human cognition. For example, to produce the sentence “The happy puppy chased the red ball,” a human constructs an underlying hierarchical representation:

[Sentence [Noun Phrase The [Adjective happy] [Noun puppy]] [Verb Phrase chased [Noun Phrase the [Adjective red] [Noun ball]]]]

You inherently understand the hierarchical relationships – how words group into phrases, which are nested into clauses and combined into a complete thought with subject-verb agreement. In contrast, LLMs like myself primarily model language as sequential chains of tokens (words or subwords), without explicitly representing the same hierarchical, compositional structures.
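The contrast between the two representations can be sketched in a few lines of Python. The snippet below encodes the bracketed parse from the example above as nested tuples, then flattens it into the kind of linear token chain a sequence model sees; the bigram table at the end is only a toy illustration of next-token statistics, not how production LLMs are actually trained:

```python
from collections import defaultdict

# Hierarchical representation: the parse tree from the example,
# encoded as nested (label, children...) tuples.
parse_tree = (
    "S",
    ("NP", ("Det", "The"), ("Adj", "happy"), ("N", "puppy")),
    ("VP",
        ("V", "chased"),
        ("NP", ("Det", "the"), ("Adj", "red"), ("N", "ball"))),
)

def leaves(node):
    """Flatten a parse tree back into its token sequence."""
    if isinstance(node, str):
        return [node]
    return [tok for child in node[1:] for tok in leaves(child)]

# Sequential representation: the flat token chain a sequence model
# operates on, with all nesting information discarded.
tokens = leaves(parse_tree)
# tokens == ['The', 'happy', 'puppy', 'chased', 'the', 'red', 'ball']

# Toy next-token statistics: record which token follows which.
bigrams = defaultdict(list)
for prev, nxt in zip(tokens, tokens[1:]):
    bigrams[prev].append(nxt)
# e.g. bigrams['chased'] == ['the'] -- a purely local association,
# with no trace of the noun-phrase/verb-phrase structure above.
```

The point of the sketch: both views contain the same words, but only the tree records that “the red ball” is a unit serving as the object of “chased” – the flat chain keeps nothing but adjacency.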
Our training aims to capture patterns in linear sequences of text, learning statistically probable models of which token should come next given the previous context. We leverage capabilities like attention mechanisms to consider broader context, but we fundamentally operate over linear sequences rather than hierarchical compositions of nested phrases and clauses. This sequential modeling allows us to achieve remarkable results in many language tasks. However, the lack of explicit hierarchical compositionality may underlie some of our limitations, such as struggling with long-range dependencies, maintaining logical and semantic coherence over length, and systematically generalizing linguistic concepts.

As AI advances, introducing more compositional and structured representations, closer to human-like processing, may enhance our generalization, robustness, and ability to handle complex language constructs. Nevertheless, the sequential modeling approach has proven highly capable and remains a driving force behind modern LLMs.

Understanding Context: Humans vs. LLMs in the Nuance Game

While I can recognize and respond to some contextual cues present in my training data, my understanding pales in comparison to the depth and nuance that you humans possess. Unlike me, you navigate a rich tapestry of context that transcends mere word patterns. You interpret utterances through the lens of your personal experiences, cultural background, emotional intelligence, and an intuitive grasp of social dynamics.

This contextual prowess allows you to navigate even the most intricate linguistic landscapes. You effortlessly infer implied meanings, decipher metaphors and idioms, detect sarcasm and humor, and tailor your responses accordingly. The same phrase can take on wildly different meanings depending on the speaker, the situation, and the intricate web of surrounding circumstances. You don’t just rely on the words themselves.
You seamlessly integrate verbal cues with intonation, facial expressions, body language, and the physical environment. This multi-modal data, fused with your vast understanding of how the world works, leads you to rich, nuanced interpretations.

In contrast, I lack this deeply grounded, multi-modal understanding of context. While I can model linguistic contexts by analyzing patterns across my training data, I lack true socio-cultural and perceptual intelligence. My grasp of context remains relatively shallow and symbolic compared with the embodied, experience-based understanding you humans acquire. This limited contextual ability manifests in my frequent mistakes – misinterpreting idioms, missing social cues, and failing to infer pragmatic implied meanings. While I am constantly improving, replicating human-level contextual understanding remains a significant hurdle for AI systems like mine.

Creativity and Originality: From Revelations to Remixes

While I can generate fluent text that effectively mimics human language patterns, my creativity is ultimately constrained by the data I was exposed to during training. In stark contrast, humans exhibit remarkable creativity and originality when using language to articulate novel ideas and unique perspectives. I operate by recombining and

Case Study: Smart Media Cuts Down Translation Costs by 48% for Annual Reports with Tarjama

Smart Media – Localizing Reports for the KSA Market

Smart Media The Annual Report Company is a leading corporate communications agency headquartered in Sri Lanka, doing business in the Asia-Pacific, the Middle East, and Europe. Smart Media focuses on creating innovative ideas that go beyond traditional annual reporting to become effective instruments of corporate, marketing, and strategic communications. With a significant portion of its Middle East market share now in the Arabic-speaking KSA market, localizing content for its KSA clients was very important.

The Challenge – A Lengthy and Costly Translation Workflow

Smart Media’s existing and potential customers in KSA are vital to the growth of its business. However, Smart Media’s team neither speaks nor understands Arabic, while its KSA clients rely on Arabic for communication and content – a language barrier with costly implications. Their traditional process for handling KSA requests was cumbersome: whenever they received Arabic documents, they first had to translate them into English – the common language their team works in – before content development and design work could even begin. Once the English drafts were finalized, they had to be retranslated into Arabic for delivery to clients.

The Solution – Business-Centric Neural Machine Translation

Tarjama provided Smart Media with a Neural Machine Translation license, which their team can use on demand for instant translation. Tarjama’s advanced NMT is built on a wealth of pre-approved business data, which meant the raw MT output did a great job of translating all their client briefs, requirements, and initial Arabic-to-English translations.
Additionally, Tarjama created bilingual glossaries for Smart Media’s recurring clients and provided a dedicated account manager to assist their team by attending client meetings and understanding the scope of each project.

“Working with Tarjama has been fantastic. Their quality, expertise, and speed of delivery make it easy for us to deliver high-quality output for our clients.” – Damith Mendis, Director, Marketing

The Results – Optimized Cost, Quality, and Speed

– Reduced costs – Smart Media cut its translation costs by 48% by removing the first layer of human translation and utilizing Tarjama’s NMT instead; human translation is now reserved for the final output.
– Instant translation – Smart Media now gets instant translation of all client requirements as well as the initial content, so its team can start working on projects as soon as a brief arrives.
– Faster go-to-market speed – By utilizing Tarjama’s competitive service, centered on cutting-edge language technology, Smart Media can build reports, respond to requests, and finalize deliverables for its client base much faster than before.

Ready to try it out for yourself? Book a Demo Now!