Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

Vice President-AI Cognitive Engineer Lead

at J.P. Morgan

Bulge Bracket Investment Banks
Mid Level · No visa sponsorship · Data Science/AI/ML

Posted 18 days ago

Compensation: Not specified
Currency: Not specified
City: Jersey City
Country: United States

Senior AI Cognitive Engineer role focused on designing and engineering multimodal human–AI systems that align with human cognition. You will analyze and model decision-making, attention, workload, and modality-switching across voice, text, visualization, and ambient interfaces; translate cognitive insights into system-level requirements for AI agents, decision support, and automation; and run simulations and user-in-the-loop experiments to validate performance. You will collaborate with product, design, and engineering teams to optimize trust, cognitive fit, and human–automation interaction.

Location: Jersey City, NJ, United States

Are you passionate about the intersection of human cognition and artificial intelligence? Join our Transformative AI team and help shape the future of multimodal human–AI systems. In this role, you’ll engineer solutions that make decision-making, information flows, and human–agent interactions more efficient, safe, and intuitive. Be part of a team that is redefining how people and technology work together.

As an AI Cognitive Engineer in the Transformative AI team, you will analyze, model, and design multimodal human–AI systems that align with human cognition. You will ensure that decision-making, information flows, and human–agent interactions are optimized across voice, text, data visualization, and ambient interfaces. Unlike traditional UI/UX design, this role focuses on understanding cognition and human performance in complex environments, then engineering systems that extend and amplify those capabilities.

Job responsibilities:

  • Conduct cognitive task analyses for multimodal workflows (voice, chat, visual dashboards, ambient signals)
  • Translate insights into system-level requirements for AI agents, decision support tools, and automation pipelines
  • Model human workload, attention, and modality-switching costs (e.g., moving between text, charts, and speech)
  • Collaborate with product, design, and engineering teams to ensure multimodal systems reflect cognitive principles
  • Design and evaluate cross-modal decision support (e.g., when should an AI “speak,” “show,” or “stay silent”)
  • Develop frameworks for trust calibration and cognitive fit in multimodal human–AI teaming
  • Run simulations and user-in-the-loop experiments to test system performance across modalities


Required qualifications, capabilities, and skills:

  • Formal training or certification in software engineering concepts and at least 5 years of applied experience
  • Advanced degree in Cognitive Engineering, Human Factors, Applied Cognitive Psychology, Systems Engineering, or related field
  • Proven experience in complex, high-stakes domains
  • Deep expertise in cognitive load and modality management, human error analysis and mitigation, decision-making under uncertainty, human–automation interaction, and voice/visual trust calibration
  • Experience evaluating multimodal AI/ML systems (voice, NLP, data visualization, multimodal agents)

 

Preferred qualifications, capabilities, and skills:

  • Ability to analyze how humans think and decide across voice, text, and visual modalities
  • Skill in translating cognitive principles into engineering requirements for multimodal AI systems
  • Experience building systems that account for human cognition across all interaction modes
  • Background in designing and testing multimodal systems

Design and optimize multimodal human–AI systems that enhance decision-making and user experience.