Data Engineer - Milan, Rome, Turin
Do you want to improve your career? Do you want to work on global projects in an international and multicultural environment?
We represent the heart of progress with migration, implementation and digital transformation projects.
Are you up for the challenge?
We are currently recruiting a Data Engineer for the Italian unit of a prestigious global company, a leader in its industry (web search engine, mail provider, web services, hardware products, etc.), based in Milan, Rome and Turin.
Responsibilities for Data Engineer:
Build and maintain optimal data pipeline architecture that is scalable and adaptable.
Assemble large, complex data sets that meet functional / non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, partnering with the infrastructure team to design for greater scalability, etc.
Build and support the platform required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Azure SQL Server technologies.
Support analytics tools (i.e. MS Power BI) and dashboards that utilize the data pipeline to provide actionable insights into operational efficiency and other key business performance metrics.
Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Keep our data separated and secure across national boundaries through multiple data centers and MS Azure regions.
Work with data and analytics experts to strive for greater functionality in our data systems.
Qualifications for Data Engineer:
Advanced working knowledge of SQL and experience with relational databases, including query authoring and familiarity with a variety of database systems.
Experience building and optimizing 'big data' data pipelines, architectures, and data sets.
Experience performing root-cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.
A successful history of manipulating, processing and extracting value from large, disconnected datasets.
Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores.
Strong project management and organizational skills.
Experience supporting and working with cross-functional teams in a dynamic environment.
Experience with big data tools: SQL Server, Blob Storage, SQL Gen2, Hadoop, Spark, Kafka, etc.
Experience with relational SQL and NoSQL databases, including Postgres and MongoDB.
Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, MS Synapse ETL, etc.
Experience with MS Azure cloud services: Synapse Analytics, Data Factory, SQL Gen2, Blob Storage.
Experience with stream-processing systems: Storm, Spark Streaming, Data Streaming, etc.
Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
Experience with DevOps, IBM Cloud, IBM DB2, IBM DataStage, or DBA work is preferred but not required.
We want you to keep growing, and we'll provide the support to make that happen!
Expected start date: //
Professional category: Infrastructure / DBA
City: Milan (Milan)
Working hours: Full time
- (Aut. Min. Prot. N. 1100-SG del . .).