
Data Engineer


We are looking for a Data Engineer who is:

• Iterative. They are excited to prototype at all levels of fidelity—and have the humility to walk away from ideas when they fail.
• Collaborative. They have the ability and enthusiasm to work with researchers, engineers, business consultants, and other designers who will challenge and support one another.
• Comfortable with ambiguity. They know projects and businesses move fast, which means the path forward isn’t always well-defined. They stay comfortable and collaborative throughout the process.
• Interdisciplinary. Depending on the need, they deliver data products for digital solutions, deploy analytical models into production, improve existing data platforms, or coach and enable other teams in best practices.


Responsibilities:

• Working with a diverse set of clients across domains and industries
• Implementing data orchestration pipelines and data sourcing, cleansing, augmentation, and quality-control processes
• Deploying machine learning models into production
• Operating independently with minimal oversight
• Mentoring data engineers to further their personal and professional growth
• Leading other engineering staff on projects
• Developing the team’s talent by providing direction and facilitating technical and architectural discussions
• Assisting with business development by writing proposals and scoping projects
• Contributing to our thought leadership through written publications and speaking at events and conferences
• Translating business needs into technical solutions
• Contributing to overall solution, integration, and enterprise architectures


Qualifications:

• 8+ years of experience working on large-scale, full-lifecycle data implementation projects
• BS/BA in data engineering, software engineering, data science, computer science, applied mathematics, or equivalent experience
• 2+ years of professional development experience with some of the AWS/Azure/GCP stack:
o Databricks
o Azure IoT Hub
o Azure Event Hubs
o Docker
o S3
o Redshift
o AWS Glue
o Azure SQL Data Warehouse
o Azure Blob Storage
o Google BigQuery
• 2+ years of experience in a client-facing role
• Subject-matter expertise in at least one area related to data management:
o An RDBMS technology
o A big data technology
• Deep knowledge of performant SQL and a solid understanding of relational database technology
• Hands-on RDBMS experience (data modeling, analysis, programming, stored procedures)
• Expertise in developing ETL/ELT workflows with one or more of the following:
o Python
o Scala
o Java
• Experience deploying data pipelines in the cloud on at least one of AWS, Azure, or GCP
• A deep understanding of relational and warehousing database technology, including work with at least one of the major database platforms (Oracle, SQL Server, Teradata, MySQL, Postgres)

Additional consideration will be given to candidates who possess some of the following:

• Experience working with streaming data
• A solid foundation in data structures, algorithms, and OO design, with fundamentally strong programming skills
• Proven success working in and promoting a rapidly changing, collaborative, and iterative product development environment
• Strong interpersonal and analytical skills
• Intellectual curiosity and an ability to execute projects
• An understanding of “big picture” business requirements that drive architecture and design decisions
• DevOps and DataOps skills, including “infrastructure as code” tools such as CloudFormation or Terraform
• Data system performance tuning
• Implementation of predictive analytics and machine learning models (MLlib, scikit-learn, etc.)
• Willingness to travel globally to work with clients and BCG teams. At times this role involves significant travel to client sites; the amount of travel will depend on client needs and the nature of projects