AI Engineering Leader • Production LLM Systems • Team Builder

Jay Ozer

I'm an AI Engineering Leader who builds production systems that elevate humanity—not just optimize metrics. From intelligent deed parsing that cut processing time by 99% to autonomous document platforms serving thousands, I architect AI solutions that make complex work simple and teams more effective. But the code is never the endpoint—it's always in service of people: dramatically reducing the cost and complexity of the mortgage process for families, clearer answers for worried parents, better tools for overloaded professionals.

What drives me? Working alongside world-class engineers in environments where cutting-edge technology meets real-world impact. From shaping 5-year data strategy at the Federal Reserve and contributing to monetary policy research, to guiding teams through IPOs at fast-paced startups and scaling engineering pods from 0→6, I've shipped 14+ production AI systems across regulated industries. My approach blends deep technical capability—RAG, multi-agent architectures, fine-tuning—with strategic leadership, always focused on shipping pragmatic solutions that make teams more effective.

I believe transformative AI requires both boundless imagination and deep responsibility. I'm here to build safe, reliable systems that ship at scale—and to learn from the brilliant minds pushing what's possible.

Download Resume
Portrait of Jay Ozer
Featured leader: Jay Ozer — building pragmatic, ethical AI experiences.
AI Systems Shipped
14+

Production LLM systems at scale

Time Reduction
99%

Intelligent deed parsing efficiency

Teams Scaled
6

Cross-functional engineering pods

Patents Pending
2

AI document processing & retrieval

Leadership

Operating at the intersection of AI vision and delivery discipline

I blend product instincts with deep technical empathy—building pods that move fast, measure outcomes, and honor responsible AI guardrails.

Cost reduction per order
27%

Average operational cost savings across AI-powered workflows, combining intelligent automation with patent-pending decisioning.

FTE capacity freed
12

Full-time equivalent roles redirected from manual processes to strategic work through AI automation and augmentation.

Core workflows automated
8+

Mission-critical processes transformed with AI—from document cognition to underwriting decisions to data pipelines.

AI Operating System

I build the scaffolding before the model. My operating system covers discovery, data audits, evaluation criteria, human-in-the-loop workflows, and governance—ensuring every AI launch is production-ready, responsible, and adopted by the teams who depend on it.

People-first scaling

I thrive at the intersection of technical depth and people development. I create frameworks where senior ICs stay empowered to do deep work while rising leaders get the coaching, feedback loops, and stakeholder exposure they need to transition confidently into management.

Proof through KPIs

I ship AI that delivers measurable outcomes. Before any model hits production, I define the metrics that matter with executives, instrument the UX for observability, and build dashboards that show adoption, performance, and business impact—not just accuracy scores.

AI Operating System

A pragmatic, auditable path from vision to adoption

My frameworks balance applied research, product discovery, and change management—so teams can ship AI that users trust and executives can measure.

  1. Step 1

    Vision & alignment

    Define the why


    I partner with executives and domain experts to map strategic intent, success metrics, and risk boundaries before anyone writes a line of code.

    Key outputs

    • North-star narrative with measurable outcomes
    • Stakeholder map and governance cadence
    • Experiment backlog prioritized by impact vs. effort
  2. Step 2

    Data readiness

    Stabilize the foundation


    I audit data sources, semantic layers, and real-time needs—then close the gaps so models aren't built on assumptions and wishful thinking.

    Key outputs

    • Source of truth inventory with ownership
    • Quality and bias assessments
    • Pipelines instrumented for lineage and observability
  3. Step 3

    Modeling & prototyping

    Prove the insights


    I combine proprietary signals with foundation models, benchmark feasibility fast, and set evaluation criteria with stakeholders—not after the fact.

    Key outputs

    • Evaluation matrix with acceptance thresholds
    • RAG/playground environment for SMEs
    • Guardrail and safety checklist
  4. Step 4

    Product & adoption

    Ship the experience


    I design human-in-the-loop workflows, instrument the UX for observability, and launch pilot cohorts with enablement for every persona who'll touch the system.

    Key outputs

    • Pilot rollout plan with training
    • Instrumentation + KPI dashboard
    • Feedback loops and iteration rituals
  5. Step 5

    Governance & scaling

    Keep the flywheel spinning


    I codify processes, playbooks, and wins so the next AI initiative moves faster—turning one-off experiments into repeatable muscle memory.

    Key outputs

    • Policy updates and audit trail
    • Reusable templates & asset library
    • Quarterly AI business review cadence
Innovation

Patents, Research & Thought Leadership

These are the systems I've built that push the boundaries of what's possible with AI. From patent-pending multi-agent architectures to production-grade document intelligence platforms, each project represents months of experimentation, iteration, and real-world validation.

Patent Pending
Patent • 19/080655

DeedIQ - Intelligent Deed Analysis System

Document Processing • Filed March 14, 2025

DeedIQ was born from a looming crisis: our critical dependency for core data services had a contract termination approaching, and the existing processes were failing us. We were consistently missing SLAs, forcing our operations team to manually intervene—a nightmare task considering deeds aren't clean machine-generated documents; they're scanned PDF images that are notoriously difficult to parse. Even worse, we had no way to track accuracy metrics, and processing delays were impacting our delivery times. The stakes were high: when I dug into our Nexus claims data, I found over 800 vesting and legal-related incidents between 2021 and 2024, costing us more than $2M. We needed a complete overhaul. DeedIQ solved this with a sophisticated multi-agent AI system using 10+ specialized LLM agents working in parallel. The system performs dual-team extraction with cross-validation, county-specific pattern recognition, and self-learning capabilities. Combining a knowledge base, neural search, and human-in-the-loop validation, we achieved 97%+ accuracy in testing across thousands of files (up from 82% with third-party providers), and since release the production system has run error-free—100% accuracy to date—while cutting processing time by 99%, from 30 minutes–4 hours down to seconds per deed.

Accuracy

97%+

Tested on thousands of files, zero production errors

AI Agents

10+

Specialized agents for parallel processing

Time Reduction

99%

From 30min-4hrs down to seconds per deed

Cost Savings

$12 → $0.20

Per deed processing cost reduction

Multi-Agent AI • Knowledgebase • Neural Search • Document Intelligence • crewAI • Self-Learning • Human-in-the-Loop
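A minimal sketch of the dual-team cross-validation step described above: two independent extraction passes are merged field by field, and any disagreement is escalated to human-in-the-loop review. Field names and the two sample extractions are illustrative, not the production schema.

```python
def cross_validate(team_a: dict, team_b: dict) -> dict:
    """Merge two independent extractions: agreeing fields are accepted,
    disagreements are flagged for human review instead of being guessed."""
    result, review_queue = {}, []
    for field in team_a.keys() | team_b.keys():
        a, b = team_a.get(field), team_b.get(field)
        if a == b and a is not None:
            result[field] = a
        else:
            review_queue.append(field)
    result["_needs_review"] = sorted(review_queue)
    return result

# Illustrative example: the two "teams" agree on grantor and county but
# disagree on vesting, so only vesting goes to a reviewer.
merged = cross_validate(
    {"grantor": "Jane Doe", "county": "Travis", "vesting": "joint tenants"},
    {"grantor": "Jane Doe", "county": "Travis", "vesting": "tenants in common"},
)
```

The key design choice is that disagreement never silently picks a winner—only consensus fields flow through automatically.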
Patent Pending
Patent • TBD

ClerkIQ - Automated County Document Retrieval

Intelligent Document Retrieval System • To be filed Q1 2026

ClerkIQ started as a gap-filler but evolved into something much bigger. We needed to supplement mortgage files required for closing orders, but our existing data providers didn't have coverage in all counties—leaving us with critical blind spots. The manual alternative was painful: our ops team would spend hours navigating disparate county clerk websites, each with its own quirky interface and authentication requirements. Or we'd send the document request to an external search firm that could charge up to $75 per document. This administrative burden meant our title officers were stuck doing clerical work instead of the decision-making tasks they were actually trained for. We initially built ClerkIQ to address those missing counties, but it worked so well that it became our primary method for obtaining all records—not just a backup. The system combines browser automation with intelligent navigation to handle the variability across hundreds of county websites, supporting multiple search inputs like instrument numbers, volume/page references, and grantor/grantee names. Built with FastAPI and PostgreSQL for asynchronous job processing, ClerkIQ performs automated authentication, includes a validation layer to verify downloaded documents match search criteria, and enables 24/7 automated operations with intelligent retry logic. The modular design scales to handle multiple counties simultaneously—we started with 7 and are expanding to 200+. The impact has been dramatic: document retrieval costs dropped from $40 to $0.50 per document (labor + materials), with projected annual savings of $2.3M and 11 FTE at full deployment. This aligns perfectly with Doma Title's mission to lower the cost of homeownership through our partnership with Fannie Mae—ClerkIQ is a key part of making that vision a reality. Our ops team can finally focus on what matters: making decisions, not chasing paperwork.

Cost Reduction

$40 → $0.50

Per document (labor + material costs)

Annual Savings

$2.3M + 11 FTE

Projected at full deployment

Coverage

7 → 200+

Counties (current → planned expansion)

Availability

24/7

Production system with automated validation

Browser Automation • FastAPI • Document Intelligence • Real-time Processing • Production System
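The "intelligent retry logic" behind 24/7 operation can be sketched as an async wrapper with exponential backoff around a county-site fetch. This is a simplified stand-in—the real system runs jobs through FastAPI and PostgreSQL; the `flaky_fetch` stub and delay values are illustrative.

```python
import asyncio

async def with_retry(fetch, attempts: int = 3, base_delay: float = 0.01):
    """Retry an unreliable county-site fetch with exponential backoff."""
    for i in range(attempts):
        try:
            return await fetch()
        except Exception:
            if i == attempts - 1:
                raise                      # out of attempts: surface the error
            await asyncio.sleep(base_delay * 2 ** i)

# Stub that fails twice before succeeding, simulating a flaky clerk site.
calls = {"n": 0}

async def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("county site timed out")
    return {"instrument": "2024-001234", "pages": 3}

doc = asyncio.run(with_retry(flaky_fetch))
```

Backoff matters here because county sites throttle aggressively; hammering a failed endpoint immediately tends to make the outage worse.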
Projects

Featured Projects & Side Builds

Production AI systems that broke industry monopolies, weekend hacks that solved real problems, and everything in between. Hands-on code, real metrics, zero fluff.

Jay IQ

Chat with an AI trained on Jay’s experience, projects, and achievements.

I wanted to create a fun website that showcased my chatbot building skills—because honestly, asking questions is way more engaging than scrolling through static text. Jay IQ lets you chat with an AI trained on my experience, projects, and achievements, with answers that cite the relevant sections to keep everything grounded. But the real magic is the job matching tool: paste any job URL and see how my background aligns with the role. Anyone who's tried to parse LinkedIn knows they don't allow scraping, so I had to get creative. I open-sourced the code with a self-learning loop where AI writes structured parsing methods that can be updated with a single click whenever LinkedIn changes their site structure. I built this as a fun experiment while creating the portfolio, but now I actually use the 'How does Jay fit this role?' feature for jobs I'm interested in—it helps me identify gaps and figure out what to learn next.

What to expect

  • Job matching: Paste any job URL to see how Jay's experience aligns with the role requirements.
  • Grounded answers: Pulls context from hero, experience, operating system, and roadmap sections.
  • Multi-site parsing: Analyzes jobs from LinkedIn, Indeed, and Glassdoor with intelligent routing and self-learning scrapers.
Healthcare

PoppyNote - SOAP Note Transcription

HIPAA-ready automatic transcription system that generates structured SOAP notes for dental professionals, reducing after-visit documentation time.

PoppyNote was born from a problem hitting close to home: my wife Andrea is a pediatric dentist, and she'd routinely stay late at the office finishing up her clinical notes after a long day with patients. It was the administrative task she complained about most. I searched for tools we could buy, but the cost didn't justify the value—plus, everything on the market was built for all health professionals, not specifically for dentistry. I wanted something dental-specific that could be fine-tuned with custom instructions for pediatric workflows. So I built PoppyNote with HIPAA-ready AWS infrastructure (S3, Lambda, CloudWatch with encryption at rest). The system records patient visits and automatically converts them into structured SOAP notes. But I didn't stop there—I built a full integration with Curve, the dental management software Andrea uses. PoppyNote pulls today's patient list via Curve's API, creates SOAP notes from the visit recordings, and automatically attaches them back to the notes section in Curve. Now Andrea doesn't have to stay late writing notes, and the documentation captures everything mentioned during the visit without her having to remember details hours later. The real game-changer is the chat function: using Pinecone Assistant with a RAG approach, clinicians can chat with prior SOAP notes before a visit—either getting a summary or asking specific questions about a patient's history. We're now expanding beyond our practice to serve all dental professionals, starting with orthodontists, and I'm actively working on integrations for other dental management software platforms.

Deployment Time

3 min

CloudFormation stack creation

Processing

Real-time

S3 event-triggered transcription

Patient Volume

1,362

Visits managed in year 2

Compliance

HIPAA-ready

AWS eligible services with encryption

Healthcare AI • Transcription • AWS • HIPAA • RAG • Document Intelligence • CloudFormation • Pinecone
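The structured-note step can be sketched as a small SOAP schema that the transcription output is mapped into before being attached back to Curve. The dataclass, render format, and the sample clinical text are all illustrative—the real system fills these sections with an LLM.

```python
from dataclasses import dataclass, asdict

@dataclass
class SoapNote:
    """The four canonical SOAP sections, kept in S/O/A/P order."""
    subjective: str
    objective: str
    assessment: str
    plan: str

    def render(self) -> str:
        # asdict preserves field declaration order, so sections render S/O/A/P.
        return "\n".join(f"{k.upper()}: {v}" for k, v in asdict(self).items())

# Illustrative note; in practice each field is generated from the visit audio.
note = SoapNote(
    subjective="Parent reports sensitivity to cold on the upper left.",
    objective="Tooth #14 shows occlusal caries on exam.",
    assessment="Early caries, no pulpal involvement suspected.",
    plan="Composite restoration; fluoride varnish; 6-month recall.",
)
rendered = note.render()
```

Keeping the note as structured fields (rather than free text) is what makes the downstream chat-with-notes retrieval reliable.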
Open Source

MCP jozer - Personal Info Server

Model Context Protocol (MCP) server exposing 14 standardized tools for personal information retrieval, built with FastMCP framework.

I built this because why not. Everything was MCP this, MCP that—so I thought, what if I had my own MCP server that could do tool calls to answer questions about me? MCP jozer is exactly that: a Model Context Protocol server that provides comprehensive personal information through 14 standardized tools accessible to any MCP-compatible client. Built with FastMCP, the server exposes tools for biography, professional experience, skills, education, projects, contact information, and CV metadata. This implementation demonstrates how to build personal API endpoints following the emerging MCP standard for AI agent integration. The server can be deployed to FastMCP Cloud or self-hosted, enabling AI assistants like Claude Desktop to access structured personal data through a unified protocol. It's open source, took me under an hour to build (as documented in my Medium post), and honestly, it's just fun to have your own AI-accessible API for yourself.

Tools

14

Standardized information endpoints

Protocol

MCP

Model Context Protocol compliant

Framework

FastMCP

Modern Python MCP framework

Version

0.1.0

Initial release

Model Context Protocol • MCP • FastMCP • Python • AI Agents • Personal API • Claude Desktop • Open Source
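The core pattern is a registry of named, typed tools that an MCP client discovers and invokes. This stdlib-only sketch mirrors the idea; the real server uses FastMCP's decorator-based registration, and the tool bodies here are illustrative placeholders.

```python
TOOLS = {}

def tool(fn):
    """Register a function as a callable tool, keyed by its name
    (a stand-in for FastMCP-style decorator registration)."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_bio() -> str:
    return "AI Engineering Leader; production LLM systems."  # illustrative

@tool
def get_skills() -> list:
    return ["RAG", "multi-agent systems", "fine-tuning"]     # illustrative

# An MCP-compatible client resolves a tool by name and calls it:
skills = TOOLS["get_skills"]()
```

With a real MCP server the same lookup happens over the protocol, so any client (Claude Desktop, Cursor, etc.) gets the same 14 endpoints without custom integration code.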
Work

IntentIQ - Intelligent Email Triage & Categorization

Production email automation system that triages and categorizes 14K+ emails per day for operations teams. Multi-stage AI pipeline handles actionable classification and intelligent routing into 7 distinct title insurance workflow categories.

IntentIQ was born from an operational crisis when interest rate drops triggered a massive surge in order volume, overwhelming the operations team with 14,000+ emails daily. The system implements a sophisticated two-stage AI pipeline: first, a triage model classifies emails as actionable or unactionable (filtering spam and FYI messages), then actionable emails flow through a category model that routes them into 7 distinct workflow categories including title & curative, claims, loan amount updates, and scheduling. Built with a vendored email parser that extracts the latest message from email threads plus attachments, IntentIQ provides both SDK and CLI interfaces for flexible integration. The platform includes automated acknowledgement responses for specific email types, which spawned the creation of the Agentbuilder Outlook MCP server for seamless Outlook integration.

Email Volume

14K+/day

Peak operational load during interest rate crisis

Workflow Categories

7

Distinct title insurance workflow classifications

AI Stages

2-Stage Pipeline

Triage classification + category routing

Automation

Auto-Response

Automated acknowledgements for specific email types

Agents SDK • Email Automation • Multi-Agent • Production System • Workflow Orchestration
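The two-stage pipeline can be sketched as a triage gate followed by a router. The keyword rules below stand in for the two LLM classifiers, and only the category names stated above are used (the production system routes into 7 categories).

```python
def triage(email: str) -> bool:
    """Stage 1 stub: actionable vs unactionable (spam / FYI filtered out)."""
    return not any(k in email.lower() for k in ("fyi", "unsubscribe"))

def categorize(email: str) -> str:
    """Stage 2 stub: route actionable email into a workflow category."""
    text = email.lower()
    if "claim" in text:
        return "claims"
    if "reschedule" in text or "closing date" in text:
        return "scheduling"
    if "loan amount" in text:
        return "loan amount update"
    return "title & curative"   # illustrative default bucket

def process(email: str):
    """Run the full pipeline; unactionable mail is dropped (returns None)."""
    return categorize(email) if triage(email) else None
```

Splitting triage from categorization keeps the expensive routing model off the large fraction of mail that needs no action at all.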
Healthcare

Poppy RAG on Slack

Slack-integrated RAG chatbot providing employees instant access to handbook information and web-grounded answers.

I'm always looking for ways to make life easier at my wife's pediatric dentistry. We've built a bunch of cool tools—an automated patient intake form, the Ask Poppy chatbot, and a few other nifty things that smooth out daily operations. But we didn't have an employee-first app. We have this thick employee handbook packed with all the information employees need to know, but when someone has a question like 'How does accrued holiday time work?' they either have to ask Andrea or dig through pages of documentation. Poppy RAG on Slack solved this problem. Since we already use Slack for office communication, I built it as a Slack app that makes onboarding new employees so much easier and gives everyone instant answers to their exact questions without endless searching. The system uses RAG to query the employee handbook and retrieve relevant information, and I built in a web search tool in case they need information on California employment laws or workplace regulations. It's used pretty much every day, and the team loves how quick and easy it is to get answers right where they're already working.

Platform

Slack

Native integration

Sources

2

Handbook + Web search

Users

Team-wide

All employees

Response Time

<5 sec

Average query response

RAG • Slack Bot • Employee Tools • Knowledge Base • Web Search
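The answer flow can be sketched as handbook retrieval first, with a fall-through to the web search tool when retrieval confidence is low. The toy handbook entry, confidence scores, and threshold are illustrative—the real bot uses vector retrieval over the full handbook.

```python
# Toy "handbook" keyed by phrase; a stand-in for the real vector index.
HANDBOOK = {
    "accrued holiday": "Holiday time accrues per month of service (example).",
}

def search_handbook(question: str):
    """Return (passage, confidence); 0.0 means nothing relevant was found."""
    for key, passage in HANDBOOK.items():
        if key in question.lower():
            return passage, 0.9
    return None, 0.0

def web_search(question: str) -> str:
    return f"[web] results for: {question}"   # stand-in for the search tool

def answer(question: str, threshold: float = 0.5) -> str:
    passage, conf = search_handbook(question)
    return passage if conf >= threshold else web_search(question)
```

Routing on confidence means handbook policy questions stay grounded in the handbook, while out-of-scope questions (e.g. California employment law) still get an answer instead of "not found".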
Healthcare

Ask Poppy

Fine-tuned GPT-4o-mini chatbot that reduced on-call weekend stress for my wife's pediatric dental practice by handling routine parent inquiries.

Built for my wife Andrea's pediatric dental practice after seeing her overwhelmed during on-call weekends. Most calls weren't emergencies—just worried parents seeking reassurance. I created this chatbot as the first line of defense, trained on 750+ Q&A pairs verified by two other pediatric dentists. Uses a fine-tuned GPT-4o-mini model integrated via custom Voiceflow functions, including GuardGroq (LLaMA Guard-powered guardrails) and KnowledgeFlow (contextual retrieval inspired by Anthropic's approach). Presented at a Voiceflow developer meetup, KnowledgeFlow became popular for improving RAG accuracy. Result: Andrea now only gets emergency calls on weekends, making on-call shifts manageable. I'm now revamping it as an MCP-forward version where patients can chat by text or voice, view their appointment details, and reschedule or book appointments.

Training Data

750+

Verified Q&A pairs

Model

GPT-4o-mini

Fine-tuned for pediatric dentistry

Availability

24/7

Always accessible

Integration

Live

Poppy Kids Pediatric Dentistry

Fine-tuning • Voiceflow • Healthcare • RAG • LLaMA Guard
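The guarded chat flow can be sketched as a safety check in front of the fine-tuned model: every message passes a LLaMA Guard-style screen (GuardGroq) first, and anything flagged is redirected to the emergency line instead of being answered. Both stubs and the trigger phrases are illustrative stand-ins for the real Groq and fine-tuned-model calls.

```python
# Illustrative emergency cues; the real guardrail is a LLaMA Guard model.
EMERGENCY_CUES = ("medical emergency", "severe bleeding")

def guard(message: str) -> str:
    """Stand-in for the GuardGroq safety check: 'safe' or 'unsafe'."""
    text = message.lower()
    return "unsafe" if any(k in text for k in EMERGENCY_CUES) else "unsafe" if False else (
        "unsafe" if any(k in text for k in EMERGENCY_CUES) else "safe"
    )

def fine_tuned_answer(message: str) -> str:
    # Stand-in for the fine-tuned GPT-4o-mini call.
    return "A little gum soreness after a cleaning is normal for a day or two."

def chat(message: str) -> str:
    if guard(message) == "unsafe":
        return "Please call the emergency line to reach the on-call dentist."
    return fine_tuned_answer(message)
```

The ordering is the whole point: routine reassurance is automated, but anything that looks like an emergency is escalated to a human before the model ever answers.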
Tool

POPMedia - Privacy-First Media Optimizer

Browser-based video compression and image optimization tool that keeps your files completely private—no uploads, no servers, all processing happens locally.

Built out of frustration with uploading private photos to remote sites for compression, POPMedia is a privacy-first media optimization tool where videos never leave your browser. Using FFmpeg.wasm running as WebAssembly, all video compression happens entirely client-side with no server uploads or cloud processing. Images are optimized server-side using Sharp for lightning-fast WebP conversion with up to 80% size reduction. The project served as a testing ground for Vercel as the go-to deployment platform for modern frontend applications, demonstrating how to build fast, privacy-focused web tools. Achieves 50-90% video file size reduction while maintaining quality, with batch processing support for multiple files simultaneously.

Privacy

100%

Videos never leave your browser

Compression

50-90%

File size reduction while maintaining quality

Processing

Client-Side

WebAssembly FFmpeg in browser

Deployment

Vercel

Testing modern frontend platform

Privacy • FFmpeg.wasm • WebAssembly • Next.js 15 • React 19 • Sharp • Vercel • WebP • Video Compression • Image Optimization
Open Source

Community Contributions & Open Tools

MCP servers, bug fixes, Voiceflow tools, and community contributions. Building in public, sharing what works, and giving back to the ecosystems that made my work possible.

GitHub Contributions

View Profile →
jayozer's GitHub contribution chart
Contribution

browser-use

Fixed critical duplicate download event bug in browser-use, improving download handling stability for both local and remote sessions.

Contributed a comprehensive fix to browser-use that resolves duplicate local download events by ensuring each download is dispatched once. The solution hardens download handling across CDP progress, JavaScript fetch, and polling paths. Added a 'handled' guard to deduplicate dispatch events, disabled JS fetch fallback for local downloads to prevent double-dispatch, and improved remote handling by emitting completion using suggestedFilename when filePath is missing. Moved CDP handlers outside try blocks, implemented handler task tracking, and added proper cleanup on completion for enhanced stability.

Impact

Bug Fix

Duplicate download events resolved

PR Status

Merged

Successfully merged contribution

Scope

Core

Download handling stability

Type

Enhancement

Local & remote sessions

Python • Browser Automation • Bug Fix • CDP • Download Handling
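The heart of the fix is a per-download "handled" guard: a download that surfaces on more than one path (CDP progress, JS fetch, polling) is dispatched exactly once. This is a simplified, illustrative reconstruction of the pattern, not the merged PR code.

```python
class DownloadTracker:
    """Deduplicates download events arriving from multiple detection paths."""

    def __init__(self):
        self._handled: set = set()     # ids already dispatched
        self.dispatched: list = []     # events actually emitted downstream

    def on_download_event(self, download_id: str) -> bool:
        """Dispatch only the first event for a given download id."""
        if download_id in self._handled:
            return False               # duplicate path: suppress
        self._handled.add(download_id)
        self.dispatched.append(download_id)
        return True

t = DownloadTracker()
first = t.on_download_event("dl-1")    # e.g. CDP progress path
second = t.on_download_event("dl-1")   # e.g. JS fetch path, suppressed
```

Guarding at dispatch time (rather than trying to disable individual detection paths) keeps all paths available as fallbacks while making the emitted event stream idempotent.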
Tool

KnowledgeFlow

Standalone Streamlit app that streamlines document uploads into Voiceflow knowledge bases to dramatically improve retrieval accuracy.

KnowledgeFlow is a popular standalone Streamlit application that makes it easy to convert documents and tables for better RAG accuracy in Voiceflow. Inspired by Anthropic's contextual retrieval approach, it automatically processes and optimizes documents before uploading them to knowledge bases. Presented at a Voiceflow developer meetup, this tool has become widely adopted by the Voiceflow community. Notably, the innovative edit-before-upload feature—allowing users to review and modify documents before ingestion—was later adopted as an official Voiceflow feature. The app handles document chunking, metadata extraction, and contextual enhancement to ensure chatbots retrieve the most relevant information.

Community

Popular

Widely adopted by Voiceflow users

Presentation

Meetup

Featured at Voiceflow developer event

Innovation

Adopted

Edit feature became Voiceflow feature

Platform

Streamlit

Standalone web application

Voiceflow • RAG • Knowledge Base • Document Processing • Contextual Retrieval • Streamlit • Firecrawl
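The contextual-enhancement step can be sketched as prepending a short document-level context line to each chunk before upload, so the retriever can match chunks that would otherwise lack their surroundings. The fixed-size splitter and context template are illustrative; the real app uses richer chunking and LLM-generated context.

```python
def chunk(text: str, size: int = 80) -> list:
    """Naive fixed-size splitter, standing in for real document chunking."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def contextualize(doc_title: str, chunks: list) -> list:
    """Prepend document context to every chunk (contextual-retrieval idea)."""
    return [f"[From: {doc_title}] {c}" for c in chunks]

doc = "Holiday time accrues monthly. " * 6     # 180-char sample document
pieces = contextualize("Employee Handbook", chunk(doc))
```

Even this tiny amount of prepended context helps, because an isolated sentence like "it accrues monthly" is otherwise unmatchable to queries about the handbook it came from.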
MCP Server

Agentbuilder Outlook MCP Server

Production-ready read & write MCP server built for OpenAI's AgentBuilder platform, filling critical gaps in the existing read-only Outlook connector.

Built specifically for use with OpenAI's AgentBuilder platform to address limitations in the existing Outlook MCP server. The original AgentBuilder Outlook connector was read-only and only supported Graph authentication—not suitable for production environments. For the IntentIQ project, write access was essential for automated email responses and workflow routing. This comprehensive MCP server provides full read & write operations with dual authentication support: both Microsoft Graph API and organizational account credentials. Supports personal Microsoft accounts (@outlook.com) and organizational accounts (Microsoft 365) with multi-tenant architecture where users provide their own credentials. Handles token acquisition, payload validation, and complete email operations including send, reply, forward, delete, and folder management. Deployed on FastMCP Cloud for immediate use by MCP-compatible clients like Claude Desktop and Cursor. Features delegated permissions for personal accounts via Graph Explorer tokens and client credentials flow for organizational accounts with Azure AD app registration.

Capability

Read & Write

Full CRUD email operations

Auth Methods

Dual

Graph API & Organizational accounts

Platform

AgentBuilder

OpenAI's agent platform

Status

Production

Powering IntentIQ automation

MCP • FastMCP • OpenAI AgentBuilder • Microsoft Graph • Outlook • Email Automation • Azure AD • Python • Read & Write • Multi-Tenant • OAuth
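The dual-authentication routing can be sketched as a simple dispatch on account type: personal Microsoft accounts go through the delegated-permissions flow, everything else through the Azure AD client-credentials flow. The personal-domain list is an illustrative assumption, not an exhaustive one.

```python
# Illustrative set of consumer Microsoft domains; organizational (Microsoft
# 365) tenants use their own custom domains.
PERSONAL_DOMAINS = {"outlook.com", "hotmail.com", "live.com"}

def pick_auth(account: str) -> str:
    """Choose the auth flow for an account: 'delegated' for personal
    accounts, 'client_credentials' for organizational tenants."""
    domain = account.rsplit("@", 1)[-1].lower()
    return "delegated" if domain in PERSONAL_DOMAINS else "client_credentials"
```

Routing here, before any token acquisition, is what lets one multi-tenant server serve both @outlook.com users and Microsoft 365 organizations with their own Azure AD app registrations.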
Library

GuardGroq

High-speed LLM guardrail function for Voiceflow using Groq's LLaMA Guard implementation for content safety.

GuardGroq integrates Groq's ultra-fast LLaMA Guard model into Voiceflow to provide real-time content moderation and safety checks. The function acts as a protective layer for conversational AI, filtering inappropriate content and ensuring safe interactions. By leveraging Groq's optimized inference, GuardGroq delivers guardrail checks with minimal latency impact on conversational flows.

Model

LLaMA Guard

Groq implementation

Speed

Ultra-fast

Minimal latency impact

Purpose

Safety

Content moderation

Integration

Voiceflow

Custom function

Voiceflow • LLaMA Guard • Groq • Content Safety • Guardrails
Tool

Multi-Site Job Scraper

AI-powered job scraper with discovery, generation, and reuse modes for LinkedIn, Indeed, and Glassdoor—achieving 99% API cost savings after initial scrape.

A sophisticated job scraping system with two complementary approaches: a direct Playwright scraper and an AI-powered parser that creates reusable, site-specific scrapers. The AI-powered mode features a three-phase workflow: Discovery Mode analyzes job pages and documents scraping strategies, Generation Mode creates validated standalone scripts, and Reuse Mode enables unlimited scraping with zero API calls. The system automatically detects job sites, handles query parameters, and adapts to structure changes. After the first scrape of a site, all subsequent jobs use the generated script with 0 API calls—a 99% cost savings. Includes production-ready URL normalization and chatbot integration utilities.

Cost Savings

99%

After first scrape per site

Success Rate

90%+

Through validation loop

Sites Supported

3+

LinkedIn, Indeed, Glassdoor

API Calls

0

For all jobs after first scrape

Playwright • AI • Web Scraping • Job Search • Anthropic Claude • URL Parsing • Chatbot Integration
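The three-mode dispatch above can be sketched as: reuse a previously generated, site-specific scraper when one exists (zero API calls), otherwise run the AI-powered discovery/generation path once and cache the result. The registry and stub scraper are illustrative; the real Generation Mode writes validated standalone scripts.

```python
GENERATED_SCRAPERS = {}   # site -> callable, produced by Generation Mode

def detect_site(url: str) -> str:
    """Identify which job board a URL belongs to."""
    for site in ("linkedin", "indeed", "glassdoor"):
        if site in url:
            return site
    return "unknown"

def scrape(url: str):
    """Return (job_data, api_calls_spent) for a job URL."""
    site = detect_site(url)
    if site in GENERATED_SCRAPERS:                  # Reuse Mode
        return GENERATED_SCRAPERS[site](url), 0
    # Discovery + Generation Mode: one-time AI cost per site, stubbed here.
    GENERATED_SCRAPERS[site] = lambda u: {"url": u, "title": "(parsed job)"}
    return GENERATED_SCRAPERS[site](url), 1

job1, cost_first = scrape("https://linkedin.com/jobs/view/123")
job2, cost_next = scrape("https://linkedin.com/jobs/view/456")
```

Amortizing the AI cost to one generation per site is where the quoted 99% savings comes from: every scrape after the first reuses the cached script.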
MCP Server

Outscraper MCP

Model Context Protocol server for Outscraper integration, enabling AI assistants to access web scraping capabilities.

Outscraper MCP is a Model Context Protocol server that bridges AI assistants with Outscraper's powerful web scraping API. This integration allows Claude and other MCP-compatible AI tools to extract structured data from the web, including Google Maps reviews, business information, and search results. Created to support the development of 'Flaky Reviews'—an app that scans Google Reviews to detect AI-removed reviews, checks compliance with Google's policies, and automatically generates restoration requests. Currently being tested on Poppy Kids Pediatric Dentistry reviews before public release. Featured on Smithery, MsseeP.ai, and Glama MCP server directories.

Protocol

MCP

Model Context Protocol

Platforms

3

Smithery, MsseeP, Glama

Integration

Outscraper

Web scraping API

Use Case

Data Collection

AI-powered scraping

MCP • Web Scraping • Outscraper • API Integration • Data Collection
Career journey

A decade of building AI and data platforms that ship

From the Federal Reserve to breaking title insurance monopolies—each role added new layers of product thinking, technical chops, and team-building muscle. Click through for the full story.

March 2023 – Present

Senior Manager, Applied AI

Doma Technology LLC

Led applied AI engineering pod delivering production agentic AI systems for title insurance automation, partnering with Fannie Mae and Blend to transform real estate workflows at enterprise scale.

July 2021 – March 2023

Manager, Data Engineering

Doma Technology LLC

Built and led high-performing data team of 6 engineers, establishing engineering excellence and modern data practices that scaled company-wide analytics.

January 2020 – July 2021

Staff Data Analyst

Doma (fka States Title)

Transformed financial services data infrastructure by establishing unified source of truth and driving company-wide adoption of modern analytics tools for post-merger integration.

August 2013 – January 2020

Senior Data Analyst

Federal Reserve Bank of San Francisco

Led data engineering and business intelligence infrastructure development processing terabytes of economic data for data-driven monetary policy decisions in highly regulated central banking environment.

May 2017 – April 2018

Project Management Consultant

Project Management Institute

Served as a global subject matter expert to develop training programs, contribute to PMP/PMI-ACP certification exams, and co-author the PMCD Framework.

Education
  • MS, Data Science

    Regis University

  • MBA, Business Administration

    San Francisco State University

  • MS, Industrial Engineering

    University of Central Florida

  • BEng, Chemical Engineering

    Newcastle University

AI/ML Credentials
  • Teaching Assistant – AI Engineering Bootcamp Cohort 6, AI Makerspace
  • AI Engineering Bootcamp – AI Makerspace
  • Advanced LLM Application Building & Fine-Tuning LLMs – Maven
  • PMI Agile Certified Practitioner (PMI-ACP)
Let's build

Ready to discuss AI leadership, team design, or your next product sprint?

Jay IQ

Jay Ozer

Hi! I'm Jay IQ, trained on Jay's work. Ask me anything—I promise I'm more fun than a typical resume.

Ask me about: