Staff Software Engineer, Data Infrastructure

ASAPP · 30+ days ago
Negotiable
Full-time
Join our team at ASAPP, where we're developing transformative Vertical AI designed to improve customer experience. Recognized in the Forbes AI 50, ASAPP designs generative AI solutions that transform the customer engagement practices of Fortune 500 companies. With our automation and simplified work processes, we empower people to reach their full potential and create exceptional experiences for everyone involved. Work with our team of talented researchers, engineers, scientists, and specialists to help solve some of the biggest and most complex problems the world is facing.

The Data Engineering team at ASAPP designs, builds, and maintains our mission-critical core data infrastructure and analytics platform. Accurate, easy-to-access, and secure data is critical to our natural language processing (NLP) customer interaction platform, which interacts with tens of millions of end-users in real time.

We're looking to hire a Staff Data Engineer with a knack for building data infrastructure systems that can handle our ever-growing volumes of data and the demands we place on them. Automation is a key part of our workflow, so you'll help design and build highly available data processing pipelines that self-monitor and report anomalies. You'll need to be an expert in ETL processes and know the ins and outs of the various data stores that serve data rapidly and securely to all internal and external stakeholders. As part of our fast-growing data engineering team, you'll also play an integral part in shaping the future of our data infrastructure as it applies to improving our existing metric-driven development and machine learning capabilities.
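
To make "self-monitoring" concrete, here is a minimal sketch in Python of the kind of volume check a pipeline step might run after each load. The step name, history source, and threshold are illustrative assumptions, not ASAPP's actual implementation:

    # Minimal self-monitoring sketch: flag a load whose row count deviates
    # sharply from recent history. Names and thresholds are hypothetical.
    import logging
    import statistics

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("pipeline")

    def check_row_count(step_name: str, row_count: int, history: list[int],
                        max_sigma: float = 3.0) -> None:
        """Warn when a run's row count is far outside the recent spread."""
        if len(history) < 2:
            return  # not enough history to judge
        mean = statistics.mean(history)
        stdev = statistics.stdev(history) or 1.0  # guard against zero spread
        if abs(row_count - mean) > max_sigma * stdev:
            log.warning("%s anomaly: %d rows vs. recent mean %.0f (stdev %.0f)",
                        step_name, row_count, mean, stdev)

    # Example: today's load is far below the recent daily volumes.
    check_row_count("orders_daily", row_count=120, history=[1000, 980, 1015, 990])

In production, a check like this would typically report to a metrics and alerting system rather than a local logger.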

Applicants who meet all, or some relevant combination, of the requirements listed below are encouraged to apply. We are able to consider remote and hybrid candidates for this role.

What you'll do

  • Design and deploy improvements to our mission-critical production data pipelines, data warehouses, and data systems
  • Recognize data flow patterns and generalize them, automating as much as possible to drive productivity gains
  • Expand our logging and monitoring processes to discover and resolve anomalies and issues before they become problems
  • Develop state-of-the-art automation and data solutions in Python, Spark, and Flink (see the PySpark sketch after this list)
  • Maintain, manage, and monitor our infrastructure, including Kafka, Kubernetes, Spark, Flink, Jenkins, general OLAP and RDBMS databases, S3 object buckets, and permissions
  • Increase the efficiency, accuracy, and repeatability of our ETL processes
  • Know how to make the tradeoffs required to ship without compromising quality
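
As a rough illustration of the Python/Spark bullet above, the following PySpark sketch rolls raw events up into daily counts. The bucket paths and column names (event_ts, event_type, user_id) are hypothetical, not a real ASAPP dataset:

    # Small PySpark batch job: daily event rollup from a hypothetical
    # events dataset. Assumes a pyspark installation and S3 access.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily-event-rollup").getOrCreate()

    events = spark.read.parquet("s3://example-bucket/events/")  # hypothetical path

    daily = (events
             .groupBy(F.to_date("event_ts").alias("day"), "event_type")
             .agg(F.count("*").alias("events"),
                  F.countDistinct("user_id").alias("users")))

    # Partitioning by day keeps downstream reads cheap and reruns idempotent.
    daily.write.mode("overwrite").partitionBy("day").parquet(
        "s3://example-bucket/rollups/daily/")  # hypothetical path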

What you'll need

  • 12+ years of experience in general software development and/or DevOps/SRE roles in AWS
  • 5+ years of experience in data engineering, data systems, and pipeline/stream processing
  • Expertise in at least one flavor of SQL, e.g. Redshift, Postgres, MySQL, Presto/Trino, Spark SQL, Hive
  • Proficiency in one or more high-level programming languages; we use Python, Scala, Java, Kotlin, and Go
  • Experience with CI/CD (continuous integration and deployment)
  • Experience with workflow management systems such as Airflow, Oozie, Luigi, or Azkaban (see the Airflow sketch after this list)
  • Experience implementing data governance, e.g. access management policies, data retention, IAM, etc.
  • Confidence operating in a DevOps-like capacity with AWS, Kubernetes, Jenkins, Terraform, and other declarative infrastructure, with an eye toward automation, alerting, monitoring, and security
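
For the workflow-management bullet above, here is a minimal DAG sketch assuming Airflow 2.x (the schedule argument replaced schedule_interval in Airflow 2.4). The task bodies are placeholders meant only to show dependency wiring, not a real ASAPP pipeline:

    # Minimal Airflow 2.x DAG: an extract step feeding a load step.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull a day of data from the source system")  # placeholder

    def load():
        print("write the transformed data to the warehouse")  # placeholder

    with DAG(dag_id="example_daily_etl", start_date=datetime(2024, 1, 1),
             schedule="@daily", catchup=False) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_load = PythonOperator(task_id="load", python_callable=load)
        t_extract >> t_load  # extract must finish before load starts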

What we'd like to see

  • Bachelor's Degree in a field of science, technology, engineering, or math, or equivalent hands-on experience
  • Experience maintaining and managing Kafka (not just using it)
  • Experience maintaining and managing OLAP/HA database systems (not just using them)
  • Familiarity operating Kubernetes clusters for various jobs, apps, and high-throughput workloads
  • Technical knowledge of data exchange and serialization formats such as Protobuf, Avro, or Thrift (see the Avro sketch after this list)
  • Experience creating and deploying Spark (Scala) and/or Flink applications
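
To illustrate the serialization-formats bullet, here is a small Avro round trip, assuming the fastavro library and a made-up record schema:

    # Avro round trip with fastavro: define a schema, write a record,
    # read it back. The Interaction schema is a hypothetical example.
    import io
    from fastavro import parse_schema, reader, writer

    schema = parse_schema({
        "name": "Interaction",
        "type": "record",
        "fields": [
            {"name": "user_id", "type": "long"},
            {"name": "utterance", "type": "string"},
        ],
    })

    buf = io.BytesIO()
    writer(buf, schema, [{"user_id": 42, "utterance": "hello"}])

    buf.seek(0)
    for record in reader(buf):
        print(record)  # {'user_id': 42, 'utterance': 'hello'}

Protobuf and Thrift follow the same broad pattern: a schema defined up front, then compact binary records encoded and decoded against it.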

ASAPP is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, disability, age, or veteran status. If you have a disability and need assistance with our employment application process, please email us at careers@asapp.com to obtain assistance. #LI-AG1 #LI-Remote

Last updated on Sep 17, 2024
