Tech Job Finder - Find Software, Technology Sales and Product Manager Jobs.

Credit Data Engineer

at Klarna

FinTech

Mid Level · No visa sponsorship · Data Engineering

Posted 10 hours ago

Compensation: Not specified
Currency: Not specified
City: Not specified
Country: Not specified

Build and operate mission-critical data products powering underwriting and credit decisioning by owning global underwriting tables and ensuring freshness, completeness, accuracy, and lineage. Design agent- and human-friendly schemas and machine-readable data contracts, implement batch and streaming pipelines, and drive observability, incident reviews, and cross-functional delivery with credit, modeling, policy, finance, and treasury teams.

What you’ll do

  • Own the global UW tables (canonical facts/dimensions for applications, decisions, features, repayments, delinquency) with clear SLAs for freshness, completeness, accuracy, and data lineage.

  • Design for AI agents and humans: consistent IDs, canonical events, explicit metric definitions, rich metadata (schemas, data dictionaries), and machine-readable data contracts.

  • Build & run pipelines (batch + streaming) that feed UW scoring, real-time decisioning, monitoring, and underwriting optimization.

  • Instrument quality & observability (alerts, audits, reconciliation, backfills) and drive incident/root-cause reviews.

  • Partner closely with the Credit Portfolio Management, Policy, Modeling, Treasury, and Finance teams to land features for RUE and consumer-centric models, plus regulatory and management reporting.
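The "machine-readable data contracts" responsibility above can be sketched in plain Python. The table name, field names, and SLA threshold below are purely illustrative placeholders, not Klarna's actual underwriting schema.

```python
from datetime import datetime, timezone

# Hypothetical machine-readable contract for an underwriting facts table.
# All names and thresholds are illustrative, not an actual schema.
CONTRACT = {
    "table": "uw_application_facts",
    "freshness_sla_hours": 4,  # max allowed staleness for the table
    "fields": {
        "application_id": {"type": str, "required": True},
        "decision": {"type": str, "required": True},
        "decided_at": {"type": datetime, "required": True},
        "credit_score": {"type": int, "required": False},
    },
}

def validate(record: dict, contract: dict = CONTRACT) -> list[str]:
    """Return a list of contract violations for a single record."""
    errors = []
    for name, spec in contract["fields"].items():
        if name not in record:
            if spec["required"]:
                errors.append(f"missing required field: {name}")
            continue
        if not isinstance(record[name], spec["type"]):
            errors.append(f"wrong type for {name}: {type(record[name]).__name__}")
    return errors

record = {
    "application_id": "app-123",
    "decision": "approved",
    "decided_at": datetime.now(timezone.utc),
}
print(validate(record))  # [] — a conforming record has no violations
```

Because the contract is data rather than code, the same structure can be published alongside the table for downstream consumers (human or agent) to validate against.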

Tech stack (what we use)

  • Languages & libraries: SQL, Python, PySpark

  • Frameworks & platforms: Apache Airflow, AWS Glue, Kafka, Amazon Redshift

  • Cloud & DevOps: AWS (S3, Lambda, CloudWatch, SNS/SQS, Kinesis), Terraform; Git; CI/CD

What you’ll bring

  • Proven ownership of mission-critical data products (batch + streaming).

  • Data modeling, schema evolution, data contracts, and strong observability chops.

  • Familiarity with AI/agent patterns (agent-friendly schemas/endpoints, embeddings/vector search).
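The quality-and-observability bullet above (freshness SLAs, reconciliation audits) can be sketched as two minimal checks in plain Python; the SLA hours and tolerance are hypothetical values, not the team's actual thresholds.

```python
from datetime import datetime, timedelta, timezone

def check_freshness(last_loaded_at: datetime, sla_hours: float) -> bool:
    """True if the table was loaded within its freshness SLA."""
    age = datetime.now(timezone.utc) - last_loaded_at
    return age <= timedelta(hours=sla_hours)

def reconcile_counts(source_rows: int, target_rows: int,
                     tolerance: float = 0.001) -> bool:
    """True if source and target row counts agree within a relative tolerance."""
    if source_rows == 0:
        return target_rows == 0
    return abs(source_rows - target_rows) / source_rows <= tolerance

# Example: a table loaded 2 hours ago against a 4-hour SLA, and a
# source/target pair that drifted by 5 rows out of 1,000,000.
fresh = check_freshness(datetime.now(timezone.utc) - timedelta(hours=2),
                        sla_hours=4)
ok = reconcile_counts(1_000_000, 999_995)
print(fresh, ok)  # True True
```

In practice checks like these would run as pipeline tasks and page an on-call engineer (or trigger a backfill) on failure, rather than just printing.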
