“Teaching Is Intellectual Creation. Just As Musicians And Authors Earn Royalties, Professors Should Too,” - Pratham Mittal, Masters’ Union

At the crossroads of education, electronics, and artificial intelligence, an AI tutor platform by Masters’ Union is reimagining knowledge delivery by blending accessibility with faculty rights and sustainable monetisation. A new era for online learning? Founder Pratham Mittal shares the journey and strategy with EFY’s Akanksha Sondhi Gaur.


Q. Could you briefly explain the core concept behind the AI-driven tutor platform?

A. The idea is simple yet disruptive: an AI-driven tutor platform where students anywhere in the world can access full MBA-level courses. Unlike pre-recorded lectures, the learning experience is delivered through interactive AI (artificial intelligence) avatars that engage in real-time, one-to-one sessions. Students can choose from multiple subject-specific tutors covering marketing, finance, strategy, and more, each trained extensively on the teachings of real professors. The AI does not just deliver content; it analyses individual challenges and adapts instruction, creating a hyper-personalised experience that closely simulates learning directly from a faculty member.

Q. Who is your target audience, and how are you positioning the brand?

A. Our initial focus is MBA and executive learners, though the model is scalable to undergraduate and K–12 education in the future.


Q. What inspired you to launch the world’s first AI royalty model for educators?

A. We wanted to rethink the role of faculty in digital education. On most platforms, professors are treated like contractors and lose control over their work. In our model, educators retain ownership of their intellectual property (IP) and earn royalties whenever their AI avatar teaches a student. This makes them true stakeholders. At the same time, advancements in AI now allow us to build a genuinely hyper-personalised learning system by combining open-source models with our proprietary innovations. The royalty framework ensures that teachers, whose content is often freely consumed without compensation, can finally monetise their expertise in a scalable, sustainable way, something we believe is a real disruption in the industry.

Q. How does this align with your larger goal of democratising global business education?

A. Education should not be limited by geography or classroom size. Our model ensures that a student in a small town can access the same quality of instruction as one in a global capital. By pairing scalable technology with fair compensation for educators, we make personalised, one-on-one tutoring, which was once reserved for the privileged few, available to anyone. This individual attention from AI tutors redefines higher education, making quality MBA learning accessible and democratic on a global scale.

Q. Do you see this model setting a new benchmark for institutions worldwide?

A. We believe this hybrid of AI and human intellectual property can reshape massive open online courses (MOOCs) and higher education as a whole. Traditional MOOCs, with their passive video format, saw completion rates of less than 1%. In contrast, AI tutors create an interactive, conversational learning experience, like having a knowledgeable friend who remembers past interactions, tracks strengths and weaknesses, and adapts guidance accordingly. This makes world-class faculty accessible anytime, reducing reliance on costly personal tutors. Classroom sizes have historically shrunk from 200 to 60 to 30 students, but with AI tutors, learning can now be one-on-one. This shift transforms the traditional one-to-many class model into a personalised, dynamic one, setting a new benchmark for scalable, high-quality education.

Q. How does the interactive avatar experience differ from traditional recorded lectures?

A. Unlike static video lectures, our AI-driven avatars are fully interactive. They converse with students in real time, adapt to learning pace, identify gaps, and provide personalised guidance, replicating the feel of a live classroom.

Q. Could you walk us through the AI technology stack and core architecture?

A. At the backend, the system combines large language models with proprietary training pipelines fine-tuned on faculty lectures, case studies, and teaching styles. It runs on data stores built on AWS S3 and related cloud services. The middleware integrates multiple AI models, including OpenAI, LLaMA, and Eleven Labs, for conversational intelligence and speech synthesis. The proprietary frontend app, developed in-house, delivers low-latency, real-time interactions, enabling personalised teaching at scale.
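The middleware described above, chaining a language model for the reply and a voice service for the avatar's speech, can be sketched roughly as follows. This is a minimal illustration, not Masters' Union's actual code: the class names, method signatures, and stub backends are all invented, standing in for the real OpenAI-, LLaMA-, and Eleven Labs-backed services.

```python
from dataclasses import dataclass

@dataclass
class TutorTurn:
    """One student interaction: question in, reply text and voice out."""
    student_text: str
    reply_text: str = ""
    audio_ref: str = ""

class TutorPipeline:
    """Hypothetical middleware: LLM for the answer, TTS for the avatar voice."""
    def __init__(self, llm, tts):
        self.llm = llm  # stand-in for an OpenAI- or LLaMA-backed chat model
        self.tts = tts  # stand-in for an Eleven Labs-style voice synthesiser

    def run(self, student_text: str) -> TutorTurn:
        turn = TutorTurn(student_text)
        turn.reply_text = self.llm(student_text)    # conversational intelligence
        turn.audio_ref = self.tts(turn.reply_text)  # speech in the professor's voice
        return turn

# Stub backends so the sketch runs without any cloud credentials.
pipeline = TutorPipeline(
    llm=lambda q: f"Let's work through: {q}",
    tts=lambda text: f"audio://clip-{len(text)}",
)
turn = pipeline.run("What is a moat in strategy?")
```

In production such a pipeline would also stream partial responses to keep the interaction low-latency, as the answer suggests.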

Q. How do the avatars manage to capture not just content but also the pedagogy and reasoning of professors?

A. We train the avatars not only on textual data but also on the faculty’s tone, cadence, and reasoning patterns, allowing them to capture both what is taught and how it is taught. Model training is based primarily on classroom video recordings, enabling replication of the instructor’s voice, style, and pedagogy. On the frontend, we have built our own app, fully owned as IP. For voice modelling, we integrate specialised AI from partners such as Eleven Labs to capture speech intonation and flow, while proprietary language models ensure authenticity in responses and delivery.

Q. Why was it important to position educators as IP holders instead of employees?

A. Teaching is intellectual creation. Just as musicians and authors earn royalties, professors should too. By treating educators’ work as IP, AI tutors carry their expertise while ensuring faculty receive credit and financial benefit. This model empowers educators, recognising their contributions and allowing them to monetise their digital likeness alongside their role as employees.

Q. How does the royalty payout work, and how is it scaled as learner numbers grow?

A. We use a ‘pay-per-use’ model, similar to Spotify, where faculty earn royalties each time a student interacts with their avatar. Earnings scale with the course’s reach and impact, supplementing their base salary. Participation is optional, though most faculty opt in.
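The Spotify-style 'pay-per-use' arithmetic described above is easy to make concrete. The per-session rate and faculty share below are invented for illustration; the platform's real figures are not public.

```python
def monthly_royalty(sessions_per_avatar: dict, rate_per_session: float,
                    faculty_share: float) -> dict:
    """Return each faculty member's royalty for the month under a
    pay-per-use model: sessions * rate * faculty share."""
    return {
        faculty: round(sessions * rate_per_session * faculty_share, 2)
        for faculty, sessions in sessions_per_avatar.items()
    }

payouts = monthly_royalty(
    {"prof_finance": 1200, "prof_marketing": 800},  # avatar sessions this month
    rate_per_session=0.50,   # hypothetical platform rate per interaction
    faculty_share=0.70,      # hypothetical faculty cut of that rate
)
# payouts["prof_finance"] == 420.0
```

Because earnings scale linearly with sessions, a popular course's royalties grow automatically with its reach, which is the point of the model.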

Q. Could this create a new career model for subject experts?

A. Yes. Imagine a finance expert building an AI teaching portfolio and reaching millions worldwide without being tied to a single institution. It is a whole new career possibility. This could define a new trajectory for educators globally.

Q. Can educators opt out or monetise their avatars independently?

A. Faculty have full flexibility to decide how and where their avatars are used. Participation is voluntary and currently limited to in-house faculty, who can opt in or out at any time.

Q. How have faculty responded to digitising their teaching style?

A. Faculty see it as a way to amplify their impact globally while retaining recognition and ownership. Currently, they rate the experience around six out of 10, with expectations to rise to seven or eight as the technology evolves.

Q. How do you balance accessibility with revenue models?

A. We use tiered pricing to keep the platform affordable for students in emerging markets while offering premium features for advanced learners. The AI is cost-efficient, allowing free access initially, with future subscription-based premium services.

Q. What is the balance between proprietary and open-source components in your AI stack?

A. It is about a 50-50 split: we use open-source models as the foundation, but our proprietary layers (training data, moderation, custom UI, visuals, and deployment) form the core differentiation.

Q. And on the electronics side, what is powering these avatars?

A. The system is fully cloud-based, leveraging AWS S3 as the core backend, with conversational intelligence and voice synthesis powered by models such as LLaMA and Eleven Labs. No custom digital signal processors (DSPs), field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), or edge hardware are currently used; reliability and scalability come from the cloud. The service operates on standard web and mobile apps, requiring a stable 4G connection. Low-bitrate streaming, Internet of Things (IoT) integration, and dedicated devices are not yet supported but may be explored in the future.

Q. Could innovations like neuromorphic chips or quantum computing play a role down the line?

A. Definitely. Neuromorphic chips can make the system far more energy-efficient, while quantum computing could unlock personalised learning simulations at an unprecedented scale. While there is potential for these emerging electronics to impact AI at scale someday, their practical adoption is still speculative.

Q. Can learners expect mentorship or case-based learning as well?

A. The avatars are trained on case libraries and real-world examples, making sessions engaging and practical. Students can ask questions anytime, giving them the experience of a personal mentor at all times.

Q. How does the system handle real-time queries at scale?

A. Our avatars run on AWS, with S3 handling storage, using layered AI orchestration with high-performance computing and intelligent caching to ensure seamless, low-latency interactions, even for thousands of simultaneous users. All backend processing, from data upload to avatar responses, happens smoothly in the cloud.

Q. What safeguards are in place to prevent misinformation, bias, or hallucination?

A. We use rigorous moderation pipelines, real-time fact-checking, and human-in-the-loop verification for sensitive topics. All AI is trained exclusively on proprietary faculty content and instructions, with strict restrictions preventing it from generating information outside this database. This minimises hallucinations or misinformation. Access is limited to enrolled students via web or mobile apps, and cloud infrastructure ensures security. Because faculty train the models directly, they retain full control over avatar responses, ensuring accuracy in specialised subjects such as strategy or venture capital.
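The restriction described above, answering only from the proprietary faculty database, amounts to gating generation on retrieval: if nothing relevant is found, the system declines rather than letting the model free-generate. The toy corpus and keyword matching below are stand-ins for the real retrieval and moderation pipeline.

```python
# Hypothetical faculty knowledge base: topic -> approved course content.
FACULTY_CORPUS = {
    "venture capital": "VC funds invest in early-stage companies for equity.",
    "strategy": "Strategy is about making trade-offs to build advantage.",
}

def guarded_answer(question: str) -> str:
    """Answer only from faculty-approved content; refuse otherwise."""
    matches = [text for topic, text in FACULTY_CORPUS.items()
               if topic in question.lower()]
    if not matches:
        # Outside the proprietary database: decline instead of hallucinating.
        return "I can only answer from my professor's course material."
    return matches[0]

print(guarded_answer("How does venture capital work?"))
```

Real systems would use embedding-based retrieval and a relevance threshold rather than substring matching, but the refusal-by-default behaviour is the key safeguard.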

Q. How is faculty data and likeness protected?

A. Avatars are watermarked, encrypted, and streamed-only, preventing download, cloning, or deepfake misuse. Multiple security layers protect sensitive faculty data and safeguard IP.

Q. Will you expand into regional languages or global markets?

A. Multilingual capability is on our roadmap to make quality business education available in major languages, though large-scale localisation is still in development.

Q. Are you collaborating with other universities or corporations?

A. Partnership discussions are ongoing. Initially, we are testing with our own faculty, with plans to eventually allow other institutions to integrate their faculty onto the platform.

Q. What metrics define success for you?

A. Key metrics are engagement, learning outcomes, and faculty earnings, with course completion rate being the most important. We maintain completion rates of over 30–40%, well above MOOC benchmarks.

Q. Beyond today’s platform, what is next?

A. We are exploring multimodal AI, including augmented reality (AR) and virtual reality (VR) classrooms, holograms, IoT, and haptic feedback, aiming for immersive, interactive, borderless learning experiences.

Q. Any final insights for the wider electronics and engineering community?

A. AI education depends on deep electronics innovation: DSPs, accelerators, edge devices, and neuromorphic chips. Engineers can significantly shape the classrooms of tomorrow, making education more accessible, effective, and engaging worldwide.


Akanksha Gaur
Akanksha Sondhi Gaur is a journalist at EFY. She has a German patent and brings a robust blend of 7 years of industrial & academic prowess to the table. Passionate about electronics, she has penned numerous research papers showcasing her expertise and keen insight.
