About me

+91 9087179138

Email:

jonahimmanuel114@gmail.com

TalentEdge

Smarter Technical Screening with AI

Helping recruiting teams screen with confidence while reducing client rejections by 35%

Scroll to explore ↓


My Role

Lead UX Designer

Duration

3 Months

Responsibilities

Defined the problem with the Product Head and stakeholders, led research, validated feasibility and priorities early, and contributed to sprint planning.

Introduction

TalentEdge was built as part of Otomeyt AI’s vision to place high-quality engineers with companies such as Infosys, Oracle, and Tech Mahindra.

Although Otomeyt AI operates in Singapore and Malaysia, we intentionally launched TalentEdge as an MVP focused only on the India market. The initial opportunity was estimated at ₹68 crores.


More importantly, this was just the starting point—the same AI screening model could later expand beyond engineers to many other roles, unlocking a market nearly 10× larger.

Key Results

By introducing structured, AI-supported screening, recruiters were able to share qualified candidates within 2 days, while client rejection rates dropped from 70% to 35%, indicating stronger candidate–role alignment and more confident recruiter decisions.


How did we achieve this? Scroll to explore ↓

User problem →

How it affects the Business

60 – 70%

Candidates shared with clients are rejected in L1 interviews

5 – 6 days

Time to hire, up from 2–3 days, because of re-screening

₹8.2 crore

Annual revenue loss due to delayed closures

Problem we’re trying to solve

Recruiters are expected to evaluate technical candidates even though most of them don’t have a technical background. During screening calls, they can’t confidently tell whether an answer is actually correct, merely well-spoken, or shows real depth. So they rely on keywords and gut feeling, ask different sets of questions for the same role to different candidates, and feel low confidence in their decisions. This causes inconsistent evaluations, bias, and the risk of sending poor-fit candidates to clients.


Business Goal

By Q3, the goal was to reduce candidate rejection from ~ 65–70% to 50% by improving early-stage technical evaluation clarity and recruiter decision confidence.


Objective

Improve submission quality during recruiter screening


Executive Intent Alignment

Since TalentEdge was being built from scratch, I aligned early with leadership stakeholders to define the product’s purpose, risks, and long-term direction before starting design exploration.


2 Marketing Specialists

2 Account Managers

In collaboration with the above teams

I worked closely with:

CEO: To understand placement-level business risks and differentiation opportunities
Product Head: To evaluate feasibility and responsible use of AI in screening workflows
Sales Head: To understand client expectations and why submitted candidates get rejected
Marketing & HR leaders: To identify positioning gaps and recruiter capability challenges
Recruiters: To understand real screening pressures and hesitation around technical decision-making

These conversations helped clarify three foundational elements.

Purpose

Support recruiters in making stronger early-stage technical screening decisions without requiring them to become technical experts.

Vision

Reduce downstream candidate rejection by improving evaluation clarity at the recruiter screening stage.

Strategic Priorities Defined with Leadership

Improve screening accuracy before client submission
Increase recruiter confidence during technical filtering

Enable structured candidate justification for client conversations
Avoid introducing additional workflow complexity

Clarifying What Success Looks Like

Reduction in rejection after recruiter screening
Faster candidate submission timelines

Improved recruiter confidence during evaluation
Higher client acceptance of submitted candidates

Success Metrics

I converted these success goals into trackable metrics.

Post-screening rejection rate

Candidate submission time

Recruiter decision confidence

Client acceptance rate

Alignment Outcome

Across stakeholders, one direction became clear:

TalentEdge should function as a decision-support layer inside recruiter workflows, not as a replacement screening system or standalone technical evaluation tool.

This alignment shaped the experience strategy before research and solution exploration began.

MVP Scope

We defined what should be built for the MVP versus what could be deferred to a future V2.

We evaluated decisions based on three factors: impact on screening quality, contribution to recruiter confidence, and feasibility within the MVP timeline.

What We Prioritized (Must-have)

◍ Supporting recruiters in understanding candidate relevance to job expectations
◍ Providing structured, explainable evaluation outputs
◍ Enabling faster and more confident candidate submissions

What We Deferred (V2)

◍ Fully guided or structured interview flows across end-to-end hiring stages
◍ Enabling actions like initiating calls directly within the platform

Design Strategy

Since TalentEdge was built from scratch, the early stage of the project was highly ambiguous.

Before defining features or solutions, I created a design strategy to connect leadership goals with experience direction and identify where the product should create impact.

There were critical unanswered questions, such as:

Where should this product create impact?

Which assumptions needed validation?

How should design contribute to reducing rejection rates?

What knowledge gaps must be resolved before solution decisions?

This created a shared decision-making framework before moving into research.

Business Goal Alignment (Q3 Target)

Reduce candidate rejection after recruiter screening from ~65–70% to below 50%


Supporting signals:

• Improve recruiter confidence during filtering
• Reduce submission turnaround time
• Increase client acceptance rate of shortlisted candidates


Strategic Positioning Direction

Based on leadership alignment, the product direction was framed as:

Supporting recruiters during technical candidate screening decisions without increasing workflow complexity

This positioned TalentEdge as a decision-support product inside recruiter workflows rather than a replacement workflow system.


Strategy

Hypothesis

Leadership discussions revealed an important assumption:

Recruiters often rely on resume keywords while clients expect clearer signals of candidate quality before submission.


Based on this, I framed the working hypothesis:


If recruiters receive better support while interpreting candidate relevance during screening, submission quality can improve and downstream rejection can reduce.


This hypothesis guided the investigation plan.


Capability Direction

(Exploration Scope Defined with Stakeholders)

Before research began, stakeholders aligned on three screening-stage areas where experience intervention might create impact.


These were exploration directions, not solutions:

• Understanding candidate relevance to job expectations
• Supporting recruiter confidence during screening
• Improving clarity while explaining candidate fit to clients


Constraints Considered Early

Since adoption depended on recruiter workflows, several constraints were identified during strategy framing:


• Recruiters are not technical experts

• Additional tools increase resistance

• Can’t change major architecture because existing calling tools are already tied into contracts


Knowledge Gaps Identified

Since TalentEdge was a new product initiative, several unknowns existed across stakeholders, users, and the market.

Stakeholder Knowledge Gaps

◍ How recruiters currently evaluate technical candidates
◍ Why submissions fail after screening
◍ Where confidence drops during filtering decisions

User Knowledge Gaps

◍ How recruiters interpret resumes at scale
◍ How they decide pass vs fail
◍ What signals recruiters trust while shortlisting candidates

Market Knowledge Gaps

◍ How existing platforms support recruiter-led technical filtering
◍ Where evaluation support gaps exist
◍ What opportunities exist for differentiation


Strategy Execution Plan

To reduce uncertainty before defining experience directions, I created a staged investigation plan.

Each phase was designed to answer a specific knowledge gap identified earlier.
Phase 1 — Contextual Inquiry
Understand how recruiters currently screen technical candidates

Phase 2 — Competitive Workflow Analysis
Understand how existing platforms support recruiter-led technical filtering and identify opportunity areas

Phase 3 — Validation Planning
Based on findings, validation methods would be selected to test whether identified opportunity areas could realistically support recruiter decision-making.

Research insights

Contextual inquiry

The Screening Loop Recruiters Were Stuck In


From the contextual inquiry, I saw a clear funnel recruiters follow as their screening process: from hiring platforms, they download the first 20–30 resumes using filters and call around 10–12 candidates. During these calls, they ask questions from a client-provided question bank and take quick notes. But decisions rarely rely on technical accuracy. Recruiters listen for keywords, judge confidence, and sometimes paste responses into tools like ChatGPT to interpret answers.


Candidates who sound confident or give long explanations are often marked as good—even when recruiters aren’t sure the answer is correct. These shortlisted candidates are then shared with clients. When clients ask for proof of quality, recruiters submit their call notes. If candidates fail the client’s L1 interview, recruiters must quickly find replacements, repeating the same rushed process again.


This revealed a core issue:

The problem wasn’t recruiter effort—it was the system they were forced to work within.


New Assumptions


Competitive Analysis

After identifying the root cause, we formed a new assumption: recruiters don’t need to become technical experts—they need confidence, clear evaluations, and guided workflows. That’s when we explored whether AI could support the exact point where humans struggle. Working with Product & Engineering, we found AI could evaluate answer consistency, clarity, and depth of knowledge, helping where recruiters feel unsure.

The remaining question was where AI should fit in the workflow & how much control it should have—supporting human judgment without replacing it. This led to a competitive analysis of AI-driven recruitment platforms.


What the focus group study revealed

What we observed

Recruiters struggled to explain why a candidate was weak. Comparing candidates across interviews felt subjective and inconsistent.

They wanted supporting signals for confidence but still preferred making the final decision themselves.


Clear boundaries

They were uncomfortable with any system making hiring decisions. They were also not interested in simple scores or good/bad labels. They wanted clear reasoning they could understand and explain. Without that, they didn’t trust the system. Human judgment remained non-negotiable.

Key Insight

Recruiters want support but not loss of control


How we defined the solution

I used a framework based on tensions: I derived tensions by looking for conflicts between user behavior and desired outcomes.

◍ From research, I saw that recruiters prefer conducting interviews in a natural, conversational way — that’s how they think and evaluate candidates.

◍ But at the same time, they were struggling with consistency and clarity in decision-making, which requires structure.

So that created a clear tension:

👉 Natural conversation vs structured evaluation.

I use this method often — I look for situations where two valid needs can’t be fully satisfied at the same time. Those tensions help define the solution space.

⚖️ Tension 1

📍 User behavior
• Recruiters work in a conversational, unstructured way
• But the output needs to be structured
→ This led to the tension: Natural flow vs structured evaluation

⚖️ Tension 2

📍 User insights
• Recruiters want AI support but don’t want to lose control
→ This led to the tension: AI assistance vs human control

⚖️ Tension 3

📍 Goals
• Hiring is slow, but decisions need to be accurate
→ This led to the tension: Speed vs confidence


Solution Mapping

To understand the design space, we explored multiple directions by intentionally pushing different sides of these tensions, approaching the defined capabilities in different ways.


Solution 1


Solution 2


Solution 3

Real-time AI Co-pilot

AI actively supports recruiters during live screening by:

◍ Providing real-time evaluation signals

◍ Suggesting follow-up questions

◍ Generating instant insights

Capability Focus


  • Faster understanding of candidate relevance

  • Immediate decision support


Cons:


  • Increased intrusion during conversations

  • Reduced trust due to real-time AI influence

  • High technical complexity for MVP

Instead of fully committing to a single direction, we combined the most valuable aspects from multiple explorations:

What We Included:


  • Analyzing interview recordings and inputs
    → To extract structured insights without interrupting the conversation

  • Generating clear, explainable evaluation outputs (a rough shape is sketched after this list)
    → To improve recruiter confidence and client communication

  • Mapping JD vs CV (relevance signals)
    → To help recruiters quickly understand candidate fit

  • Providing suggested screening questions
    → To support, not enforce, structured interviews
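To make “structured, explainable evaluation outputs” and the JD vs CV relevance signals more concrete, here is a minimal TypeScript sketch of what such a report could look like. The shape and field names (relevance, strengths, gaps, suggestedFollowUps) are illustrative assumptions for this case study, not the actual TalentEdge data model.

```ts
// Illustrative sketch only: field names and values are assumptions, not the real TalentEdge schema.

interface RelevanceSignal {
  jdRequirement: string;                      // requirement taken from the job description
  cvEvidence: string;                         // evidence found in the CV or screening call
  match: "strong" | "partial" | "missing";    // how well the candidate maps to the requirement
}

interface ScreeningReport {
  candidateId: string;
  role: string;
  relevance: RelevanceSignal[];               // JD vs CV mapping (relevance signals)
  strengths: string[];                        // plain-language reasoning a recruiter can repeat to a client
  gaps: string[];                             // concerns expressed as reasoning, not a single score
  suggestedFollowUps: string[];               // optional questions to support, not enforce, structure
  recruiterDecision?: "submit" | "hold" | "reject"; // the final call always stays with the recruiter
}

// Example of a report a recruiter might read after a screening call.
const exampleReport: ScreeningReport = {
  candidateId: "cand-0142",
  role: "Java Backend Engineer",
  relevance: [
    { jdRequirement: "Spring Boot microservices", cvEvidence: "3 years on a payments microservice", match: "strong" },
    { jdRequirement: "Kubernetes deployments", cvEvidence: "Mentions Docker only", match: "partial" },
  ],
  strengths: ["Explained transaction handling with a concrete project example"],
  gaps: ["Could not describe how services were deployed or scaled"],
  suggestedFollowUps: ["Ask how they debugged a failed deployment in their last project"],
};
```

Keeping the output as reasoning plus relevance signals, rather than a single score, reflects the research finding that recruiters rejected good/bad labels and wanted explanations they could pass on to clients.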

🎯 Why This Direction

Instead of optimizing for a single dimension, we composed a solution that balances assistance, flexibility, and feasibility.

This direction was chosen because it:


  • Preserves natural recruiter workflows (no interruption during conversations)

  • Introduces structure at the decision stage, where clarity matters most

  • Enhances confidence without enforcing rigid systems

  • Delivers maximum value within MVP feasibility constraints

Key Trade-off

We deliberately combined post-interaction intelligence + lightweight guidance + relevance signals,
while trading off real-time assistance and fully structured interview control.

What we prioritized:

Clarity in evaluation
Recruiter confidence

Ease of adoption
Feasibility for MVP

What we consciously gave up:

Instant, real-time AI feedback

Fully guided or standardized interview flows

Maximum automation

🧩 Outcome on Product Experience

These trade-offs directly shaped the final product:

  • AI operates asynchronously (after interaction)

  • Recruiters maintain full control over conversations

  • System provides structured insights without enforcing structure

  • Screening becomes clearer, faster, and more explainable

  • AI acts as a decision-support layer, not a replacement system

Design

Client dashboard — single system of record

Recruiters managed client work across emails, Excel, ATS, and notes, so progress mostly lived in their heads. We introduced the Client Dashboard — a single place with structured client cards and auto-updated counts. Now recruiters can quickly see openings, screened candidates, and overall progress. This improved pipeline visibility helped recruiters stay aligned on role priorities and screening status, enabling more structured coordination before submissions were made to clients.

Testing & validation

Once we had the initial solution in place, I tested it with recruiters to validate whether it actually worked in real screening scenarios.

Early positives

5 out of 8 recruiters (62%) mentioned they still preferred taking notes in a notebook during live calls, indicating a continued reliance on familiar behaviors during conversations.

8 out of 8 recruiters (100%) said the AI-generated report helped them better understand candidate fit and increased their confidence while submitting candidates to clients.

7 out of 8 recruiters (87%) found that AI-generated questions, reports, and centralized candidate information reduced cognitive load and made the screening process easier to manage.

The testing confirmed that recruiters valued AI as a supporting layer that added clarity and increased their confidence.

What Didn’t Work

Still needed paper and other tools to take notes

One major issue we saw was around note-taking. Even with the system, some recruiters preferred writing notes on paper or in Excel instead of the chat. They worried that if they pressed Enter, the AI would respond and interrupt their focus during the call. Another concern was that if they had a long conversation with the AI, their notes could get buried in the chat, making them harder to find later.

What Didn’t Work

Status updates took too many steps, and switching between candidates was difficult.

Updating candidate status was slow. Recruiters had to open the profile and come back each time, which felt tiring during back-to-back calls. Switching between candidates also took too many steps.

What Didn’t Work

Screening questions get lost in chat

One subtle but important issue was inside the chat itself. After long conversations, screening questions, reports, or audio recordings moved far up in the chat.

Final changes

Chat-style candidate navigation

Earlier, updating status or switching candidates required opening profiles and going back.

We simplified this by bringing everything into one screen with a tab-based layout.

Recruiters can switch between candidates instantly from the list, similar to chat apps they already use. This matches their existing mental model and removes unnecessary navigation. As a result, updating status and switching candidates feels fast, natural, and effortless, even during busy screening hours.


Before

After

Smart pinning & built-in notes

After long chats, screening questions moved up, forcing recruiters to scroll. Pressing Enter in notes also triggered the AI accidentally, disrupting the flow.


To fix this, screening questions, audio, and AI reports are auto-pinned at the top. I also added built-in notes inside the candidate profile, keeping everything in one place.

This removed constant interruptions and made screening faster and smoother.


Before

After

Impact

We tracked results through Operations and Sales dashboards. 60% of recruiters shared qualified candidates within 2 days, improving delivery speed.

Client rejection rates also dropped from 65% to 35%, showing better candidate–role alignment and stronger screening quality.

This led to faster closures, higher recruiter confidence, and greater client trust.

Design System

As the product direction became clearer through user testing and iteration, we realized the interface needed consistency before moving into detailed implementation. Due to the short delivery timeline, creating a full-scale design system from scratch was not practical.

Instead, I focused on building a lightweight, implementation-ready design system that could support speed without sacrificing structure.

To accelerate setup, I used Figr Identity to generate the base layout structure of the design system from existing UI patterns. This allowed me to focus effort on refining tokens, accessibility, and scalability rather than rebuilding primitives manually.


Token Architecture

After importing the generated structure into Figma, I organized the system using a three-tier token architecture:

• Brand tokens — core color, typography, spacing foundations
• Alias tokens — semantic mapping across components
• Component-level mapped tokens — implementation-ready usage values

I also used Foundation Color Generator to define brand color scales and Specs Plugin to document component usage and developer-ready specifications.

This ensured consistency between visual decisions and engineering implementation.
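To show how the three tiers relate in practice, here is a minimal TypeScript sketch of the token hierarchy; the specific token names and values are placeholders for illustration rather than the actual TalentEdge tokens.

```ts
// Placeholder values for illustration: not the actual TalentEdge brand palette or spacing scale.

// Tier 1: Brand tokens, the raw foundations (color, typography, spacing)
const brand = {
  color: { blue500: "#2F6FED", gray900: "#16181D", white: "#FFFFFF" },
  font: { family: "Inter", sizeBody: "16px" },
  space: { sm: "8px", md: "16px", lg: "24px" },
} as const;

// Tier 2: Alias tokens, semantic roles mapped onto brand values
const alias = {
  color: {
    surface: brand.color.white,
    textPrimary: brand.color.gray900,
    actionPrimary: brand.color.blue500,
  },
  spacing: { cardPadding: brand.space.md },
} as const;

// Tier 3: Component-level tokens, implementation-ready usage values
const component = {
  button: {
    background: alias.color.actionPrimary,
    label: alias.color.surface,
    paddingX: brand.space.md,
  },
  candidateCard: {
    background: alias.color.surface,
    text: alias.color.textPrimary,
    padding: alias.spacing.cardPadding,
  },
} as const;

export { brand, alias, component };
```

Because component tokens only reference the alias and brand layers, a brand-level change (for example, updating blue500) propagates through the system without touching component definitions, which kept the lightweight system implementation-ready for engineering.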


Accessibility Considerations

Accessibility was integrated early during system setup:

• Used Funkify Disability Simulator to simulate vision and motor conditions
• Used Figma plugins like Able Plugin and Contrast Plugin to verify contrast and readability

Beyond tooling validation, I ensured:

• reduced unnecessary motion for motion-sensitive users
• simplified interaction patterns for motor accessibility
• consistent layouts and language for cognitive accessibility
• captions and transcripts wherever audio or video content appeared

This helped establish an accessibility-aware foundation before engineering implementation began.



Before you go…

Back to top


Think I could be a good fit?

Let’s discuss how I can contribute to your team

Let’s make an impact. Reach out anytime.

Jonah Immanuel

Product Designer

Contact me

jonahimmanuel114@gmail.com

If you’d like to know more,
happy to connect for a screening call.

+91 9087179138

Don’t be a stranger. Come back anytime.

Copyright © JonahImmanuel, 2026
