Description

Responsibilities:

• Rapidly architect, design, prototype, implement, and optimize architectures to address the Big Data and Data Science needs of a variety of Fortune 1000 corporations and other major organizations; develop a modular code base to solve real-world problems, and conduct regular peer code reviews to ensure code quality and compliance with industry best practices.

• Work in cross-disciplinary teams with KPMG industry experts to understand client needs and ingest rich data sources (social media, news, internal/external documents, emails, financial data, and operational data).

• Research, experiment with, and utilize leading Big Data methodologies (Hadoop, Spark, Kafka, Netezza, SAP HANA, and AWS) with cloud, on-premise, and hybrid hosting solutions; provide expert documentation and operating guidance for users of all levels.

• Translate advanced business analytics problems into technical approaches that yield actionable recommendations across multiple, diverse domains; communicate results and educate others by designing and building insightful visualizations, reports, and presentations.

• Develop skills in business requirement capture and translation, hypothesis-driven consulting, work stream and project management, and client relationship development.

• Help drive the process for pursuing innovations, target solutions, and extendable platforms for Lighthouse, KPMG, and clients; participate in developing and presenting thought leadership; assist in ensuring that the Lighthouse technology stack incorporates and is optimized for specific technologies; and help promote the KPMG brand in the broader data analytics community.


Qualifications:

• BS/MS in Computer Science, Computer Engineering, or a related field and a minimum of four years of big data experience with multiple programming languages and technologies, or a PhD in Computer Science, Computer Engineering, or a related field; two or more years of professional services experience is preferred.

• Expert ability to rapidly ingest, transform, engineer, and visualize data for both ad hoc and product-level (e.g., automated) data and analytics solutions; expertise with programming methodologies (version control, testing, and QA) and development methodologies (Waterfall and Agile); ability to work efficiently in Unix/Linux or .NET environments; and experience with source code management systems such as Git and SVN.

• Market-leading fluency in several programming languages (such as Python, Scala, or Java) with the ability to pick up new languages and technologies quickly; understanding of cloud and distributed systems principles (load balancing, networks, scaling, in-memory vs. disk, etc.); and experience with large-scale big data methods (MapReduce, Hadoop, Spark, Hive, Impala, or Storm); full-stack development capability is preferred.

• Experience with object-oriented design, coding, and testing patterns, as well as with engineering (commercial or open source) software platforms and large-scale data infrastructures; familiarity with architecture patterns such as event-driven, SOA, microservices, functional programming, and Lambda; ability to architect highly scalable distributed systems using a range of open source tools.


Education

Any Graduate