To apply please visit:

[ Link removed ] – Click here to apply to Senior Engineer Data Platforms (Hybrid)

We have an immediate need for a Senior Engineer, Data Platforms to join a world-renowned management consulting firm. This is a direct-hire, hybrid-capable role based out of their Waltham or Atlanta office, as part of their One Firm Tech – Cloud Data & Analytics team. You will work with product managers, software engineers, architects, and various platform teams, on a team responsible for delivering the technology-enabled solutions of the future. You will be involved in all business value chain activities, from understanding product needs through product development to ongoing maintenance and enhancement.

What You Will Do:

You will be responsible for creating innovative interoperability platforms, tools, and solutions to enable seamless and secure data integration.

In this role, your solutions will connect legacy, newly developed, and vendor applications across data center and cloud environments, and you will be responsible for the full lifecycle of those solutions.

You will develop specifications, design infrastructure and interfaces, and write code. Engineers and Senior Engineers spend roughly 80% of their time hands-on coding; Senior Engineers also coach and mentor.

You will design and build scalable, secure ETL pipelines in PySpark.

You will develop complex PySpark code using Spark SQL, DataFrames, joins, transposes, etc., to load data into an MPP data warehouse (Snowflake).

You will apply a strong understanding of Python and Spark concepts such as Spark SQL, DataFrames, joins, and transposes.

You will create ETL data pipelines in PySpark that read from Kafka topics, relational databases, APIs, and other sources and load the data into object storage (e.g., S3).

What Gets You The Job:

Bachelor's or Master's degree in a technology-related field

5 years of IT experience, including about 3 years in data engineering and ETL/ELT

Experience with the Java/J2EE tech stack.

Experience designing and developing data pipelines using PySpark in any public cloud (e.g., AWS, GCP, Azure) or in hybrid environments.

Proficient in SQL, data modeling, and data warehouse concepts.

Proficient in developing microservices using Java and Spring Boot, including REST API creation and consumption.

Conceptual understanding of modern software engineering patterns, including those used in highly scalable, distributed, and resilient systems.

Solid understanding of NoSQL databases such as MongoDB and Elasticsearch; experience with Kubernetes, Docker, and CI/CD pipeline configuration.

Experience developing and delivering systems on the AWS cloud platform or an equivalent.

Experience with the AWS SDK and Lambda; with MPP data warehouses such as Snowflake, BigQuery, or Redshift; with GraphQL API development; and with AWS Glue, Glue Studio, and Blueprints.

Experience implementing effective, successful cloud-based data migration and data integration strategies.

Please send your resume to Colin Crane for immediate consideration.

Irvine Technology Corporation (ITC) is a leading provider of technology and staffing solutions for IT, Security, Engineering, and Interactive Design disciplines, serving clients from startups to enterprises nationally. We pride ourselves on our ability to introduce you to our intimate network of business and technology leaders, bringing you opportunity coupled with personal growth and professional development. Join us. Let us catapult your career! Irvine Technology Corporation provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability, or genetics. In addition to federal law requirements, Irvine Technology Corporation complies with applicable state and local laws governing non-discrimination in employment in every location in which the company has facilities.
