Lead/PM for a JS accessibility monitoring engine; explored ML beyond rule‑based checks.


Console tasks are complex; accessibility regressions are easy to miss across services.
AWS has historically audited console accessibility manually; automating this process reduces the human time spent identifying WCAG accessibility errors.
I was originally hired into AWS as a Machine Learning Engineer in the Machine Learning Solutions Lab. After a company-wide restructuring, I moved to console accessibility to help design systems (including ML pipelines) for automating accessibility testing.
I supported my manager as team lead, aligning engineers on a shared JS monitoring engine and measurement plan, and worked with stakeholders to evaluate user stories. I also wrote several reports on potential solutions for machine-learning-assisted accessibility monitoring.

• Requirements, backlog, and delivery cadence.
• Dashboarding for violations and trends.
• Integration playbooks for service teams.
• ML exploration: signals for heuristic failures, labeling loop, and guardrails.
• Roadmap + eval criteria to trial ML checks safely.
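To make the rule-based side concrete: a minimal sketch of the kind of heuristic check such a JS monitoring engine runs, flagging images without alternative text (WCAG 1.1.1). The element shape and function name here are illustrative assumptions, not the engine's actual API.

```javascript
// Hypothetical rule check: report <img> elements missing an alt attribute.
// Elements are a simplified stand-in for parsed DOM nodes: { tag, attrs }.
function checkImgAlt(elements) {
  return elements
    .filter((el) => el.tag === 'img')          // only images are in scope
    .filter((el) => !('alt' in el.attrs))      // alt="" is valid (decorative)
    .map((el) => ({ rule: 'img-alt', wcag: '1.1.1', element: el }));
}

const violations = checkImgAlt([
  { tag: 'img', attrs: { src: 'logo.png', alt: 'Company logo' } },
  { tag: 'img', attrs: { src: 'chart.png' } }, // missing alt → violation
]);
console.log(violations.length); // 1
```

Checks in this style are cheap to run per page load, which is what makes trend dashboards and per-service integration feasible; the ML exploration targeted failures these deterministic rules cannot express.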