The Generation Data Engineering programme is a 12-week course designed to take people from all backgrounds with little to no experience and help them start a career in data engineering.
The programme was developed with the global employment charity Generation. Accenture and Next Generation Engineering provide the technical curriculum and the instructors who deliver the programme; Generation provides soft-skills and employability content designed to prepare the learners for employment.
Ultimately, the programme aims to get each learner employed as an entry-level data engineer. Our key part in this is to help them become well-rounded, adaptable data engineers who understand their roles, responsibilities, and ways of working.
This material overlaps with what the School of Tech delivers for Accenture, Next Generation, and our clients - see also Data Engineering Academy.
The 12-week course is split into two halves:
- Foundations & Mini-Project (weeks 1-6): introduces Python programming, CRUD data operations, simple data storage, and basic application design from the ground up, alongside associated professional programming practices such as version control, unit testing, using IDEs, and the Unix shell. Most of the time is spent on taught sessions and guided workshops, where learners are taught skills and concepts and practise them through exercises. In parallel, learners apply these new skills and knowledge to a mini-project: a simple CLI application that each learner builds individually and extends incrementally over the six weeks. This helps them both solidify and apply their learning, and builds confidence in what they can achieve.
- Advanced Concepts & Final Project (weeks 7-12): building on the core skills from the first half, the lessons now focus on data engineering technologies, techniques, and tools, together with agile delivery and cloud infrastructure. Concepts such as data normalisation, data cleansing, and ETL are introduced, along with data warehousing, data streaming, and data queues. These concepts are still delivered via taught sessions, but more of the second half is dedicated to a final team project, with each team working in a shared codebase to build an ETL data pipeline that processes, analyses, and visualises data.
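As a rough illustration of the extract-transform-load pattern the final project is built around, here is a minimal pipeline over a small CSV sample: extract parses raw rows, transform cleanses them, and load writes them into a simple keyed store standing in for a warehouse. The data and all names are hypothetical, not drawn from the project itself.

```python
# Minimal ETL sketch: extract raw CSV rows, transform (cleanse and
# normalise), load into a simple store. Illustrative assumptions only.
import csv
import io

RAW_CSV = """name,score
Ada, 91
 grace ,88
,75
"""

def extract(text: str) -> list[dict]:
    """Extract: parse raw CSV text into row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: strip whitespace, drop rows missing a name, cast types."""
    cleaned = []
    for row in rows:
        name = (row.get("name") or "").strip().title()
        if not name:
            continue  # data cleansing: discard incomplete rows
        cleaned.append({"name": name, "score": int(row["score"].strip())})
    return cleaned

def load(rows: list[dict], store: dict) -> None:
    """Load: write cleansed rows into a keyed store."""
    for row in rows:
        store[row["name"]] = row["score"]

warehouse: dict[str, int] = {}
load(transform(extract(RAW_CSV)), warehouse)
print(warehouse)  # {'Ada': 91, 'Grace': 88}
```

In the project itself each stage would be larger (real sources, a real database or warehouse, plus analysis and visualisation), but the stage boundaries shown here are what let a team split work across a shared codebase.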