Generation Data Academy

Last updated: 2022-05-23

Overview

The Generation Infinity Works Data Engineering programme is a 12-week course designed to take people from all backgrounds, with little to no prior experience, and help them start a career in data engineering.

The programme has been developed in partnership with the global employment charity Generation. Infinity Works provides the technical curriculum and also the instructors to lead the delivery of the programme. Generation provides soft skills and employability content designed to help prepare the learners for employment.

The mission of the programme is ultimately to get each learner into employment as an entry-level data engineer. Infinity Works' key contribution is to help them become well-rounded, adaptable data engineers who understand their role, responsibilities, and ways of working.

Programme structure

The 12-week course is split into two halves:

  • Foundations & mini-project (weeks 1-6): introducing Python programming, CRUD data operations, simple data storage, and basic application design from the ground up. This half also introduces associated professional programming practices such as version control, unit testing, using IDEs, and the Unix shell. The majority of the time is spent delivering taught sessions and guided workshops, teaching learners the skills and concepts and practising these through exercises. In parallel, the learners apply these new skills and knowledge to a mini-project: a simple CLI application that each learner builds individually and extends incrementally over the six weeks. This helps them both solidify and apply their learning, and builds confidence in what they can achieve.
  • Advanced concepts & final project (weeks 7-12): building on the core skills learnt in the first half, the lessons now focus more specifically on data engineering technologies, techniques, and tools, together with agile delivery and cloud infrastructure. Concepts such as data normalisation, data cleansing, and ETL are introduced, together with data warehousing, data streaming, and data queues. These concepts are delivered via taught sessions, but more time in the second half is also dedicated to working in teams on a final team project, with a shared codebase per team. The final team project involves building an ETL data pipeline to process, analyse, and visualise data.
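To give a flavour of the ETL pattern the final project is built around, the sketch below shows a minimal extract-transform-load pipeline in Python using only the standard library. The data, table, and function names here are hypothetical examples for illustration, not part of the actual curriculum or project brief.

```python
# Minimal ETL sketch: CSV in, cleaned rows out, loaded into SQLite.
# All data and names are hypothetical, for illustration only.
import csv
import io
import sqlite3

RAW_CSV = """name,score
 Alice ,85
bob,
Carol,92
"""

def extract(text):
    """Extract: read raw records from a CSV source."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: trim whitespace, normalise names, drop incomplete rows."""
    cleaned = []
    for row in rows:
        name = row["name"].strip().title()
        score = row["score"].strip()
        if not score:
            continue  # simple data-cleansing rule: discard rows missing a score
        cleaned.append((name, int(score)))
    return cleaned

def load(rows, conn):
    """Load: write the cleaned rows into a relational table."""
    conn.execute("CREATE TABLE scores (name TEXT, score INTEGER)")
    conn.executemany("INSERT INTO scores VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
result = conn.execute("SELECT name, score FROM scores ORDER BY name").fetchall()
# result -> [('Alice', 85), ('Carol', 92)]
```

A real final project would extract from external sources, apply richer cleansing and normalisation rules, and load into a data warehouse, but the extract/transform/load separation stays the same.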