
Software Engineer III - Data Engineer, Spark, Databricks, SQL
at J.P. Morgan
Posted 18 hours ago
- Compensation: Not specified
- City: Plano
- Country: United States
- Currency: Not specified
Seeking a creative Software Engineer to design, develop, and troubleshoot cloud data platform solutions. The role involves writing secure production code in Python or Java and leveraging Apache Spark, Databricks, and SQL for large-scale data processing. Responsibilities include data analysis, visualization, automation of recurring issues, and contributing to architecture and best practices across engineering teams. Experience with AWS, Snowflake, Unix, and tooling like Git, Jenkins, and CI/CD is expected.
Location: Plano, TX, United States
Job responsibilities
- Executes creative software solutions, design, development, and technical troubleshooting, with the ability to think beyond routine or conventional approaches to build solutions and break down technical problems
- Develops secure high-quality production code, and reviews and debugs code written by others
- Identifies opportunities to eliminate or automate remediation of recurring issues to improve overall operational stability of software applications and systems
- Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems
- Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture
- Leads communities of practice across Software Engineering to drive awareness and use of new and leading-edge technologies
- Adds to team culture of diversity, equity, inclusion, and respect
Required qualifications, capabilities, and skills
- Formal training or certification on Software Engineering concepts and 3+ years applied experience.
- Proven hands-on experience in Python or Java development on data platforms, including practical expertise with Apache Spark for large-scale data processing.
- Strong proficiency in SQL for data querying, transformation, and analysis.
- Experience with cloud data platform technology stacks such as AWS S3, Snowflake, and Databricks, with hands-on expertise in Spark, SQL, Unix, AWS, and data modeling.
- Knowledge of application, data, and infrastructure architecture disciplines.
- Knowledge of Git, Bitbucket, Maven, Jenkins, Jira, Control-M, or equivalent tools.
- Demonstrated knowledge of software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning).
Preferred qualifications, capabilities, and skills
- Working experience with Kafka is a plus.
- Experience with Java 8/11/17 and object-oriented design and analysis is a plus.
- Experience with Unix and shell scripting is a plus.
