About me

+91 9087179138

Email:

jonahimmanuel114@gmail.com

Talent Edge

Smarter Technical Screening with AI

Helping recruiting teams screen with confidence while cutting client rejections from 65% to 35%


My Role

Lead UX Designer

Duration

1 Month

Responsibilities

Defined the problem with the Product Head and stakeholders, led research, validated feasibility and priorities early, and contributed to sprint planning.

Introduction

TalentEdge was built as part of Otomeyt AI’s vision to place high-quality engineers with companies such as Infosys, Oracle, and Tech Mahindra.

Although Otomeyt AI operates in Singapore and Malaysia, we intentionally launched TalentEdge as an MVP focused only on the India market. The initial opportunity was estimated at ₹68 crores.


More importantly, this was just the starting point—the same AI screening model could later expand beyond engineers to many other roles, unlocking a market nearly 10× larger.

Collaborate

Since this product was built from scratch, I aligned early with key stakeholders to shape the right direction.


I worked with the CEO on business risks & differentiation, the Product Head on feasibility and AI ethics, and the Sales Head to understand client expectations and why candidates get rejected.

I also collaborated with Marketing and HR Heads for positioning and capability gaps, and spoke directly with recruiters to understand their real concerns, especially the fear of making technical judgments.


2 Marketing Specialists

2 Account Managers

User problem →

How it affects the Business

60 – 70%

Candidates shared with clients are rejected in L1 interviews

5 – 6 days

Time to hire increased (up from 2–3 days) due to re-screening

₹8.2 crore

Annual revenue loss due to delayed closures

The problem we’re trying to solve

Recruiters are expected to evaluate technical candidates, even though most of them don’t have a technical background. During screening calls, they can’t confidently tell whether an answer is actually correct, merely well-spoken, or shows real depth. So they rely on keywords and gut feeling, ask different sets of questions for the same role to different candidates, and feel low confidence in their decisions. This causes inconsistent evaluations, bias, and the risk of sending poor-fit candidates to clients.

Initial beliefs

Recruiters rely on keywords &
clients expect a clear signal of candidate quality

Answering these questions required us to observe real behavior, not opinions.
That’s why the next step was to study recruiters in their real context, using contextual inquiry, as shown below.

This led to key questions:
How do recruiters screen today?
Where do they feel least confident?
How do they decide pass vs fail?
What happens after client rejection?

Research insights

Contextual inquiry

The Screening Loop Recruiters Were Stuck In


From the contextual inquiry, I saw a clear funnel recruiters follow as their screening process: from hiring platforms, they download the first 20–30 resumes using filters and call around 10–12 candidates. During these calls, they ask questions from a client-provided question bank and take quick notes. But decisions rarely rely on technical accuracy. Recruiters listen for keywords, judge confidence, and sometimes paste responses into tools like ChatGPT to interpret answers.


Candidates who sound confident or give long explanations are often marked as good—even when recruiters aren’t sure the answer is correct. These shortlisted candidates are then shared with clients. When clients ask for proof of quality, recruiters submit their call notes. If candidates fail the client’s L1 interview, recruiters must quickly find replacements, repeating the same rushed process again.


This revealed a core issue:

The problem wasn’t recruiter effort—it was the system they were forced to work within.


New Assumptions


Competitive Analysis

After identifying the root cause, we formed a new assumption: recruiters don’t need to become technical experts—they need confidence, clear evaluations, and guided workflows. That’s when we explored whether AI could support the exact point where humans struggle. Working with Product & Engineering, we found AI could evaluate answer consistency, clarity, and depth of knowledge, helping where recruiters feel unsure.

The remaining question was where AI should fit in the workflow & how much control it should have—supporting human judgment without replacing it. This led to a competitive analysis of AI-driven recruitment platforms.


What the focus group study revealed

What we observed

Recruiters struggled to explain why a candidate was weak. Comparing candidates across interviews felt subjective and inconsistent.

They wanted supporting signals for confidence but still preferred making the final decision themselves.


Clear boundaries

They were uncomfortable with any system making hiring decisions. They were also not interested in simple scores or good/bad labels. They wanted clear reasoning they could understand and explain. Without that, they didn’t trust the system. Human judgment remained non-negotiable.

Key Insight

Recruiters want support but not loss of control


How we defined the solution

We decided to build a technical co-pilot for recruiters.

Scores CVs against job descriptions
Evaluates candidate interview calls
Generates clear reports
Provides evidence to support candidate quality

Design

Client dashboard — single system of record

Recruiters managed client work across emails, Excel, ATS, and notes, so progress mostly lived in their heads. We introduced the Client Dashboard — a single place with structured client cards and auto-updated counts. Now recruiters can quickly see openings, screened candidates, and overall progress.

Testing & validation

Once we had the initial solution in place, I tested it with recruiters to validate whether it actually worked in real screening scenarios.

Early positives

Recruiters immediately liked having all candidate information in one place.


Especially the AI-generated reports gave them more confidence than their usual notes; they felt they could now send candidates to clients with evidence. Updating candidate status inside the flow also felt more structured than their existing tools.


What Didn’t Work

Still needed paper & other mediums to take notes

One major issue we saw was around note-taking. Even with the system, some recruiters preferred writing notes on paper or in Excel instead of in the chat. They worried that if they pressed Enter, the AI would respond and interrupt their focus during the call. Another concern was that in a long conversation with the AI, their notes could get buried in the chat, making them harder to find later.

What Didn’t Work

Status updates took too many steps, and switching between candidates was difficult.

Updating candidate status was slow. Recruiters had to open the profile and come back each time, which felt tiring during back-to-back calls. Switching between candidates also took too many steps.

What Didn’t Work

Screening questions get lost in chat

One subtle but important issue was inside the chat itself. After long conversations, screening questions, reports, or audio moved far up in the chat.

Final changes

Chat-style candidate navigation

Earlier, updating status or switching candidates required opening profiles and going back.

We simplified this by bringing everything into one screen with a tab-based layout.

Recruiters can switch between candidates instantly from the list, similar to chat apps they already use. This matches their existing mental model and removes unnecessary navigation. As a result, updating status and switching candidates feels fast, natural, and effortless, even during busy screening hours.


Before

After

Smart pinning & built-in notes

After long chats, screening questions moved up, forcing recruiters to scroll. Pressing Enter in notes also triggered the AI accidentally, disrupting the flow.


To fix this, screening questions, audio, and AI reports are auto-pinned at the top. I also added built-in notes inside the candidate profile, keeping everything in one place.

This removed constant interruptions and made screening faster and smoother.


Before

After

Impact

We tracked results through Operations and Sales dashboards.

60% of recruiters shared qualified candidates within 2 days, improving delivery speed.

Client rejection rates also dropped from 65% to 35%, showing better candidate–role alignment and stronger screening quality.

This led to faster closures, higher recruiter confidence, and greater client trust.


Before you go…

Back to top


Think I could be a good fit?

Let’s discuss how I can contribute to your team

Let’s make an impact. Reach out anytime.

Jonah Immanuel

Product Designer

Contact me

jonahimmanuel114@gmail.com

If you’d like to know more,
happy to connect for a screening call.

+91 9087179138

Don’t be a stranger. Come back anytime.

Copyright © JonahImmanuel, 2026
