Our classroom training provides you the opportunity to interact with instructors and benefit from face-to-face instruction.
Big Data Bootcamp
High-quality training from
Certified & Industry Experts
Earn 16 PDUs
Course Completion Certificates
Extensive documentation provided
Full refund guaranteed if you are not satisfied with the training
Our approach is both practical and inspirational. Our training is carefully designed and tested to develop the key skills and confidence you need, while remaining highly participatory and fun.
This big data training course provides a technical overview of Apache Hadoop for project managers, business managers and data analysts. Students will understand the overall big data space and the technologies involved, and will get a detailed overview of Apache Hadoop. The course exposes students to real-world use cases that demonstrate the capabilities of Apache Hadoop. Students will also learn about YARN and HDFS, and how to develop applications and analyze Big Data stored in Apache Hadoop using Apache Pig and Apache Hive. Each topic includes hands-on experience.
Introduction to Big Data
- ● Big Data – beyond the obvious trends
- ● Exponentially increasing data
- ● Big data sources
- ● Data warehousing, business intelligence, analytics, predictive statistics, data science
Survey of Big Data technologies
- ● First generation systems
- ● Second generation systems
- ● Enterprise search
- ● Visualizing and understanding data with processing
- ● NoSQL databases
- ● Apache Hadoop
Introduction to Hadoop
- ● What is Hadoop? Who are the major vendors?
- ● A dive into the Hadoop Ecosystem
- ● Benefits of using Hadoop
- ● How to use Hadoop within your infrastructure?
Introduction to MapReduce
- ● What is MapReduce?
- ● Why do you need MapReduce?
- ● Using MapReduce with Java and Ruby
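The two phases of MapReduce can be illustrated without a cluster. Below is a minimal pure-Python sketch of the map, shuffle and reduce steps of the classic word-count job — it simulates what the Hadoop framework does across many machines, and is not actual Hadoop API code.

```python
from collections import defaultdict

def map_phase(line):
    # Map: emit a (word, 1) pair for every word in the input line.
    for word in line.lower().split():
        yield (word, 1)

def shuffle(pairs):
    # Shuffle: group all values by key, as the framework does between phases.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    # Reduce: sum the counts for each word.
    return (key, sum(values))

lines = ["big data is big", "hadoop processes big data"]
pairs = [pair for line in lines for pair in map_phase(line)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts)  # {'big': 3, 'data': 2, 'is': 1, 'hadoop': 1, 'processes': 1}
```

In real Hadoop the map and reduce functions run in parallel on many nodes, but the logic you write is exactly this simple.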
Introduction to YARN
- ● What is YARN?
- ● What are the advantages of using YARN over classic MapReduce?
- ● Using YARN with Java and Ruby
Introduction to HDFS
- ● What is HDFS?
- ● Why do you need a distributed file system?
- ● How is a distributed file system different from a traditional file system?
- ● What is unique about HDFS when compared to other file systems?
- ● How does HDFS ensure reliability?
- ● Does HDFS support compression, checksums and data integrity?
Transforming Data with Pig
- ● Why do you need to transform data?
- ● What is Pig?
- ● Use cases for Pig
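Pig Latin expresses a transformation as a pipeline of load, filter and group steps. As a rough illustration only (this is plain Python, not Pig syntax, and the visit records are made up), the same load-filter-group flow looks like this — in a real Pig script these steps would be LOAD, FILTER and GROUP statements:

```python
from itertools import groupby

# Hypothetical input: (user, page, seconds) page-visit records.
records = [
    ("alice", "home", 12),
    ("bob", "home", 3),
    ("alice", "cart", 45),
    ("bob", "cart", 30),
]

# FILTER-like step: keep only visits longer than 10 seconds.
long_visits = [r for r in records if r[2] > 10]

# GROUP-like step: group the remaining visits by page.
long_visits.sort(key=lambda r: r[1])
by_page = {page: [r[0] for r in rows]
           for page, rows in groupby(long_visits, key=lambda r: r[1])}
print(by_page)  # {'cart': ['alice', 'bob'], 'home': ['alice']}
```

Pig runs the same kind of pipeline as a series of MapReduce jobs over data of any size.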
Structured Data Analysis
- ● How do you handle structured data with Hadoop?
- ● What is Hive/HCatalog?
- ● Use cases for Hive/HCatalog
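Hive lets you query data stored in Hadoop with a SQL-like declarative language (HiveQL). As a stand-in sketch, the query below uses Python's built-in sqlite3 module — the aggregation reads essentially the same as the HiveQL you would run over a Hive table; the table and column names are invented for illustration:

```python
import sqlite3

# Build a small in-memory table standing in for a Hive table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE page_views (user TEXT, page TEXT, seconds INTEGER)")
conn.executemany(
    "INSERT INTO page_views VALUES (?, ?, ?)",
    [("alice", "home", 12), ("bob", "home", 3), ("alice", "cart", 45)],
)

# A declarative aggregation; HiveQL would read almost identically.
rows = conn.execute(
    "SELECT page, COUNT(*), SUM(seconds) FROM page_views "
    "GROUP BY page ORDER BY page"
).fetchall()
print(rows)  # [('cart', 1, 45), ('home', 2, 15)]
```

The point of Hive is that analysts who already know SQL can analyze petabyte-scale data without writing MapReduce code.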
Loading data into Hadoop
- ● How do you move your existing data into Hadoop?
- ● What is Sqoop?
Automating workflows in Hadoop
- ● Benefits of Automation
- ● What is Oozie?
- ● Automatically running workflows
- ● Setting up workflow triggers
Exploring opportunities in your own organization
- ● Framing scenarios
- ● Understanding how to ask questions
- ● Tying possibilities to your own business drivers
- ● Common opportunities
- ● Real world examples
Hands-on Labs
- ● How to use YARN within Hadoop?
- ● Overview of HDFS commands
- ● Hands-on activities with Pig
- ● Hands-on activities with Hive/HCatalog
- ● Hands-on activities with Sqoop
- ● Demonstration of Oozie
What You Will Learn
- ● Learn about the big data ecosystem
- ● Understand the benefits and ROI you can get from your existing data
- ● Learn about Hadoop and how it is transforming the workspace
- ● Learn about MapReduce and Hadoop Distributed File system
- ● Learn about using Hadoop to identify new business opportunities
- ● Learn about using Hadoop to improve data management processes
- ● Learn about using Hadoop to clarify results
- ● Learn about using Hadoop to expand your data sources
- ● Learn about scaling your current workflow to handle more users while lowering your overall cost
- ● Learn about the various technologies that comprise the Hadoop ecosystem
- ● Learn how to write a simple MapReduce job in Java or your favorite programming language
- ● Learn how to use a very simple scripting language to transform your data
- ● Learn how to use a SQL like declarative language to analyze large quantities of data
- ● Learn how to connect your existing data warehouse to the Hadoop ecosystem
- ● Learn how to move your data to the Hadoop ecosystem
- ● Learn how to move the results of your data analysis to Business Intelligence tools like Tableau
- ● Learn how to automate your workflow using Oozie
- ● Learn about polyglot persistence and identifying the right tool for the right job
- ● Learn about future trends in Big data and technologies to keep an eye on
- ● Discover tips and tricks behind successful Hadoop deployments
Course Topics
- ● Introduction to Big Data
- ● Introduction to Hadoop
- ● Hadoop Distributed File System (HDFS)
- ● MapReduce
- ● YARN
- ● Pig
- ● Hive
- ● Sqoop
- ● Oozie
Anybody who is involved with databases or data analysis, or who is wondering how to deal with mountains of data (anywhere from gigabytes of user/log data to petabytes), will benefit from this program. This course is perfect for:
- ● Business Analysts
- ● Software Engineers
- ● Project Managers
- ● Data Analysts
- ● Business Customers
- ● Team Leaders
- ● System Analysts
No prior knowledge of big data or Hadoop is required for this class. Some prior programming experience is a plus, but not necessary.
What does Mangates provide me on the day of the course?
We provide course materials, a course completion certificate and refreshments.
What experience do the instructors have?
All our instructors are certified industry experts with years of experience in their field.
Do you provide a group discount for classroom training programs?
We provide group discounts: 10% for a group of 3, 15% for a group of 5, and 20% for a group of 10.
If I cancel my enrolment, how can I claim my refund?
You can request a refund by sending an email to firstname.lastname@example.org, and you will receive your money back within 7 working days.