Data Engineer
hlpy, born in 2020, is a tech-driven roadside assistance insurance start-up. What started as a bold idea in Italy quickly expanded to France and Spain, and as of September 2024 we are proud to announce our entry into Germany. This marks another milestone in our journey to becoming the go-to solution for modern roadside assistance across Europe, and beyond. We operate on a B2B2C model, partnering with key players in the automotive world: insurance companies, car manufacturers, rental agencies, and mobility providers. Through our advanced digital platform, their customers can instantly request roadside assistance and replacement solutions in real time, transforming the way help is delivered on the road.

Do you dream in SQL, speak fluent Python, and have a love affair with data? We're looking for a Data Engineer who knows their way around databases and ETL pipelines. As a member of the Data & AI team, you'll help promote and expand a data-centric culture within the company. Your secret weapons? An insatiable curiosity, boundless creativity, and a knack for spotting patterns faster than Rick can invent a new gadget during an interdimensional crisis (hopefully with fewer explosions).

What You'll Do (Besides Drinking Coffee & Debugging):
- Analyze, prepare, and organize data: you'll be the Jake and Elwood of pipelines.
- Build data systems and pipelines: you'll engineer data pipelines with precision and scalability.
- Evaluate business needs and objectives: translate vague business requests into actionable, data-driven solutions.
- Explore ways to enhance data quality and reliability: think of yourself as the Gandalf of data: "you shall not pass" unless it's clean and reliable.
- Develop and maintain BI and analytics tools: you'll deliver insights that help the team become the real decision-makers of the office.
- Collaborate with other teams on several projects: you'll bridge gaps, solve problems, and bring logic to chaotic data situations.
Requirements:
- 2 years of experience as a Data/Analytics Engineer or in a similar role.
- Experience building and maintaining ETL and ELT data pipelines and managing workflow orchestrators (e.g. Airflow, Dagster).
- Hands-on experience with SQL and NoSQL databases (Postgres and Elasticsearch preferred), as well as database design and query optimization.
- Proficiency in one or more programming languages (Python preferred).
- Knowledge of software development and architecture best practices.
- Familiarity with deploying REST microservices with CI/CD in containerized environments.
- Familiarity with cloud providers (AWS preferred).
- Strong communication and presentation skills.
- Ability to manage multiple projects and prioritize tasks effectively in a fast-paced environment.

Bonus Points If You:
- Have experience in DS/ML projects.
- Master concepts like Clean Code, TDD, and MDD.

Our Stack:
- Backend: microservices written in Python, Kotlin (Quarkus), and NodeJS, deployed on AWS as Kubernetes pods (EKS).
- Data stack: ELK (Elasticsearch, Logstash, Kibana) for the analytics platform; Airflow as pipeline orchestrator; Python as the main language.
- DB: Postgres, Elasticsearch, and Redis.
- CI/CD: our code relies on Git and GitLab.
- IaC: Terraform.
- Cloud: AWS.

Mandatory:
- At least a B2 level in English.
- Residing in one of these countries: Italy, France, Spain, Germany.

Salary Range: 30k - 35k

Are you still here? What are you waiting for to apply?

Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Information Services
Job listing details
Company: Buscojobs
Location: Milano, Lombardia
Added: 10.3.2025