
Introduction to Big Data with Apache Spark
UC Berkeley

This Lecture
• The Big Data Problem
• Hardware for Big Data
• Distributing Work
• Handling Failures and Slow Machines
• Map Reduce and Complex Jobs
• Apache Spark

Some Traditional Analysis Tools
• Unix shell commands, Pandas, R
All run on a single machine!

The Big Data Problem
• Data growing faster than computation speeds
• Growing data sources
  » Web, mobile, scientific, …
• Storage getting cheaper
  » Size doubling every 18 months
• But, stalling CPU speeds and storage bottlenecks

Big Data Examples
• Facebook’s daily logs: 60 TB
• 1,000 genomes project: 200 TB
• Google web index: 10+ PB
• Cost of 1 TB of disk: ~$35
• Time to read 1 TB from disk at 100 MB/s: about 3 hours (1 TB ÷ 100 MB/s = 10,000 s ≈ 2.8 hours)

The Big Data Problem
• A single machine can no longer process or even store all the data!
• Only solution is to distribute data over large clusters

Google Datacenter
How do we program this thing?!

Hardware for Big Data
Lots of hard drives … and CPUs

Hardware for Big Data
One big box? (the 1990s solution)
• But, expensive
  » Low volume
  » All "premium" hardware
• And, still not big enough!
Image: Wikimedia Commons / User:Tonusamuel

Hardware for Big Data
• Consumer-grade hardware
  » Not "gold plated"
• Many desktop-like servers
  » Easy to add capacity
  » Cheaper per CPU/disk
• Complexity in software
Image: Steve Jurvetson/Flickr

Problems with Cheap Hardware
• Failures (Google's numbers):
  » 1-5% of hard drives fail per year
  » 0.2% of DIMMs fail per year
• Network speeds versus shared memory
  » Much more latency
  » Network slower than storage
• Uneven performance

What's Hard About Cluster Computing?
• How do we split work across machines?

How do you count the number of occurrences of each word in a document?

"I am Sam
I am Sam
Sam I am
Do you like
Green eggs and ham?"

I: 3, am: 3, Sam: 3, do: 1, you: 1, like: 1, …

One Approach: Use a Hash Table
Scan the document one word at a time, incrementing that word's count in an in-memory hash table:

"I am Sam / I am Sam / Sam I am / Do you like / Green eggs and ham?"

{}  →  {I: 1}  →  {I: 1, am: 1}  →  {I: 1, am: 1, Sam: 1}  →  {I: 2, am: 1, Sam: 1}  →  …

(A short Python sketch of this approach follows.)
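A minimal single-machine sketch of the hash-table approach (plain Python, not from the slides; the punctuation/case normalization is an illustrative detail):

```python
from collections import defaultdict

def word_count(document):
    """Count word occurrences with an in-memory hash table (a dict)."""
    counts = defaultdict(int)
    for word in document.split():
        counts[word.strip('?.,!"').lower()] += 1   # normalize punctuation and case
    return dict(counts)

doc = "I am Sam I am Sam Sam I am Do you like Green eggs and ham?"
print(word_count(doc))   # {'i': 3, 'am': 3, 'sam': 3, 'do': 1, 'you': 1, 'like': 1, ...}
```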

What if the Document is Really Big?

"I am Sam / I am Sam / Sam I am / Do you like / Green eggs and ham? / I do not like them / Sam I am / I do not like / Green eggs and ham / Would you like them / Here or there? / …"

What if the Document is Really Big?
• Split the document across Machines 1-4; each builds its own partial hash table, e.g. {I: 3, am: 3, Sam: 3, …}, {do: 2, …}, {Sam: 1, …}, {Would: 1, …}
• Machine 5 merges the partial tables into one result: {I: 6, am: 4, Sam: 4, do: 3, …}
• What's the problem with this approach? The merged result has to fit on one machine!

What if the Document is Really Big?
• Can add aggregation layers: partial tables such as {I: 3, am: 3, Sam: 3, …}, {I: 2, do: 1, …}, {do: 2, …} are combined in stages ({I: 4, am: 3, …}, then {I: 6, am: 3, you: 2, not: 1, …})
• But the results still must fit on one machine! (A small merge sketch in Python follows.)
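A sketch of that aggregation step, assuming each machine has already produced a partial count table (the counts below are illustrative, not taken from the slides):

```python
from collections import Counter
from functools import reduce

# Partial counts produced independently on Machines 1-4 (illustrative values)
partial_counts = [
    Counter({"I": 3, "am": 3, "Sam": 3}),
    Counter({"do": 2, "you": 2, "like": 2}),
    Counter({"I": 2, "do": 1, "not": 1}),
    Counter({"Would": 1, "you": 1, "them": 1}),
]

# Machine 5 (or a tree of aggregators) merges them; the merged result
# still has to fit in a single machine's memory.
total = reduce(lambda a, b: a + b, partial_counts, Counter())
print(total.most_common(3))   # e.g. [('I', 5), ('am', 3), ('Sam', 3)]
```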

What if the Document is Really Big?
Use Divide and Conquer!
• MAP: Machines 1-4 each scan their own part of the document and emit partial counts, e.g. {I: 1, am: 1, …}, {do: 1, you: 1, …}, {Would: 1, you: 1, …}
• REDUCE: the partial counts are regrouped by word, and each machine sums the counts for its subset of words, e.g. {I: 6, do: 3, …}, {am: 5, Sam: 4, …}, {you: 2, …}, {Would: 1, …}
This is Google's Map Reduce (2004): http://research.google.com/archive/mapreduce.html
(A small Python sketch of the map → shuffle → reduce flow follows.)
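A minimal single-process sketch of the MapReduce pattern, only to illustrate the map → shuffle → reduce flow for word count (plain Python; this is not Google's implementation):

```python
from collections import defaultdict

def map_phase(line):
    """Map: emit a (word, 1) pair for every word in one input split."""
    return [(word, 1) for word in line.split()]

def shuffle(mapped_pairs):
    """Shuffle: group values by key, so each reducer sees all counts for one word."""
    groups = defaultdict(list)
    for word, count in mapped_pairs:
        groups[word].append(count)
    return groups

def reduce_phase(word, counts):
    """Reduce: sum the counts for a single word."""
    return word, sum(counts)

lines = ["I am Sam", "I am Sam", "Sam I am", "Do you like Green eggs and ham"]
mapped = [pair for line in lines for pair in map_phase(line)]
result = dict(reduce_phase(w, c) for w, c in shuffle(mapped).items())
print(result)   # {'I': 3, 'am': 3, 'Sam': 3, 'Do': 1, 'you': 1, ...}
```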

Map Reduce for Sorting
"What word is used most?"
• MAP: emit per-word partial counts as before, e.g. {I: 1, am: 1, …}, {do: 1, you: 1, …}, {Would: 1, you: 1, …}
• REDUCE: range-partition the (count, word) pairs by count (≤ 2, ≤ 4, ≤ 5, > 5), so each machine holds one sorted range: {1: Would, 2: you, …}, {3: do, 4: Sam, …}, {5: am, …}, {6: I, …}
(A rough range-partitioning sketch follows.)
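A rough sketch of the range-partitioned sort the slide hints at: assign each (count, word) pair to a partition by its count, sort within partitions, and concatenate. The partition boundaries (≤ 2, ≤ 4, ≤ 5, > 5) and counts are the illustrative values from the slide:

```python
import bisect

counts = {"I": 6, "am": 5, "Sam": 4, "do": 3, "you": 2, "Would": 1}

# Range-partition by count: each "reducer" gets one contiguous range of counts.
boundaries = [2, 4, 5]                       # partitions: <=2, <=4, <=5, >5
partitions = [[] for _ in range(len(boundaries) + 1)]
for word, n in counts.items():
    partitions[bisect.bisect_left(boundaries, n)].append((n, word))

# Each partition is sorted locally; concatenating gives a global sort.
globally_sorted = [pair for part in partitions for pair in sorted(part)]
print(globally_sorted[-1])                   # most frequent word: (6, 'I')
```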

What's Hard About Cluster Computing?
• How to divide work across machines?
  » Must consider network, data locality
  » Moving data may be very expensive
• How to deal with failures?
  » If 1 server fails every 3 years, then with 10,000 nodes you see about 10 faults per day (10,000 ÷ (3 × 365 days) ≈ 9)
  » Even worse: stragglers (not failed, but slow nodes)

How Do We Deal with Machine Failures?
• If a machine fails, its partial result (e.g., {I: 1, am: 1, …}) is lost
• Launch another task! The failed machine's portion of the work is rerun on another machine

How Do We Deal with Slow Tasks?
• A straggler machine may take far longer than the rest to produce its partial result (e.g., {I: 1, am: 1, …})
• Launch another task! Run a backup copy of the slow task on another machine and use whichever copy finishes first
(A configuration sketch follows.)
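Both Hadoop MapReduce and Spark call this strategy speculative execution. As a hedged sketch of how it is switched on in Spark (introduced later in this lecture; the parameter values here are illustrative):

```python
from pyspark import SparkConf, SparkContext

# Sketch only: re-launch backup copies of straggler tasks ("speculative execution").
conf = (SparkConf()
        .setAppName("straggler-demo")
        .set("spark.speculation", "true")              # enable backup tasks for slow stragglers
        .set("spark.speculation.multiplier", "1.5"))   # illustrative: a task this much slower than the median is a straggler
sc = SparkContext(conf=conf)
```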

Map Reduce: Distributed Execution
MAP → REDUCE
• Each stage passes through the hard drives

Map Reduce: Iterative Jobs
• Iterative jobs involve a lot of disk I/O for each repetition (Stage 1 → Stage 2 → Stage 3 → …)
• Disk I/O is very slow!


Apache Spark Motivation
Using Map Reduce for complex jobs, interactive queries, and online processing involves lots of disk I/O, and disk I/O is very slow!
• Interactive mining (Query 1, Query 2, Query 3, …)
• Stream processing (Job 1, Job 2, …)
• Also, iterative jobs

Tech Trend: Cost of Memory
[Chart: price per MB of memory, flash, and disk over the years; memory reached about 1 ¢/MB in 2010 (http://www.jcmit.com/mem2014.htm)]
• Lower cost means we can put more memory in each server

Hardware for Big Data
Lots of hard drives … and CPUs … and memory!

Opportunity
• Keep more data in-memory!
• Create a new distributed execution engine (Apache Spark):
http://people.csail.mit.edu/matei/papers/2010/hotcloud_spark.pdf

Use Memory Instead of Disk
With Map Reduce, every stage goes back to disk:
• Iterative: Input → HDFS read → iteration 1 → HDFS write → HDFS read → iteration 2 → …
• Interactive: Input → HDFS read → query 1, query 2, query 3, … → result 1, result 2, result 3

In-Memory Data Sharing
• Iterative: Input → HDFS read → iteration 1 → iteration 2 → … (data shared in memory between iterations)
• Interactive: Input → one-time processing → distributed memory → query 1, query 2, query 3 → result 1, result 2, result 3
• Memory is 10-100x faster than network and disk
(A short PySpark sketch of this pattern follows.)
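A minimal PySpark sketch of the in-memory data-sharing pattern, assuming an existing SparkContext `sc`; the input path and the filter strings are placeholders:

```python
# Load and parse once, cache in cluster memory, then run several interactive
# queries against the cached data instead of re-reading HDFS each time.
lines  = sc.textFile("hdfs://.../logs")                 # placeholder path
errors = lines.filter(lambda line: "ERROR" in line)     # one-time processing
errors.cache()                                          # keep in distributed memory

# Repeated queries now read from memory rather than from disk.
query1 = errors.filter(lambda line: "timeout" in line).count()
query2 = errors.filter(lambda line: "disk" in line).count()
```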

Resilient Distributed Datasets (RDDs)
• Write programs in terms of operations on distributed datasets
• Partitioned collections of objects spread across a cluster, stored in memory or on disk
• RDDs built and manipulated through a diverse set of parallel transformations (map, filter, join) and actions (count, collect, save)
• RDDs automatically rebuilt on machine failure
(A short PySpark example follows.)
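A short PySpark sketch of these ideas, assuming a running SparkContext `sc`; the file names are placeholders:

```python
# Build an RDD pipeline from transformations (lazy) and actions (eager).
words = (sc.textFile("seuss.txt")                        # placeholder input file
           .flatMap(lambda line: line.split())           # transformation: line -> words
           .filter(lambda w: w != ""))                   # transformation: drop empty tokens

counts = words.map(lambda w: (w, 1)).reduceByKey(lambda a, b: a + b)
counts.cache()                                           # keep partitions in memory across uses

print(counts.count())                                    # action: number of distinct words
top = counts.takeOrdered(5, key=lambda kv: -kv[1])       # action: five most frequent words
counts.saveAsTextFile("counts_out")                      # action: save results (placeholder path)
```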

The Spark Computing Framework
• Provides programming abstraction and parallel runtime to hide complexities of fault-tolerance and slow machines
• "Here's an operation, run it on all of the data"
  » I don't care where it runs (you schedule that)
  » In fact, feel free to run it twice on different nodes

Spark Tools
Built on the Apache Spark core:
• Spark SQL
• Spark Streaming
• MLlib (machine learning)
• GraphX (graph)

Spark and Map Reduce Differences

                            Hadoop Map Reduce        Spark
Storage                     Disk only                In-memory or on disk
Operations                  Map and Reduce           Map, Reduce, Join, Sample, etc.
Execution model             Batch                    Batch, interactive, streaming
Programming environments    Java                     Scala, Java, R, and Python

Other Spark and Map Reduce Differences
• Generalized patterns → unified engine for many use cases
• Lazy evaluation of the lineage graph → reduces wait states, better pipelining (see the sketch below)
• Lower overhead for starting jobs
• Less expensive shuffles
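A small sketch of the lazy-evaluation point: transformations only record lineage, and nothing runs until an action is called (assumes a SparkContext `sc`):

```python
data = sc.parallelize(range(1, 1_000_001))

# These transformations only build a lineage graph; no job has run yet.
squares = data.map(lambda x: x * x)
evens   = squares.filter(lambda x: x % 2 == 0)

# The first action triggers execution of the whole pipelined lineage.
print(evens.take(3))    # [4, 16, 36]
```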

In-Memory Can Make a Big Difference
Two iterative machine learning algorithms:
• K-means Clustering: 121 sec (Hadoop MR) vs. 4.1 sec (Spark)
• Logistic Regression: 80 sec (Hadoop MR) vs. 0.96 sec (Spark)

First Public Cloud Petabyte Sort
• Daytona Gray 100 TB sort benchmark record (tied for 1st place)
http://databricks.com/blog/2014/11/05/spark-officially-sets-a-new-record-in-large-scale-sorting.html

Spark Expertise Tops Big Data Median Salaries
• Over 800 respondents across 53 countries and 41 U.S. states
http://www.oreilly.com/data/free/2014-data-science-salary-survey.csp

History Review

Historical References
• circa 1979 – Stanford, MIT, CMU, etc.: set/list operations in LISP, Prolog, etc., for parallel processing
  http://www-formal.stanford.edu/jmc/history/lisp/lisp.htm
• circa 2004 – Google: MapReduce: Simplified Data Processing on Large Clusters, Jeffrey Dean and Sanjay Ghemawat
  http://research.google.com/archive/mapreduce.html
• circa 2006 – Apache Hadoop, originating from Yahoo!'s Nutch Project, Doug Cutting
  http://research.yahoo.com/files/cutting.pdf
• circa 2008 – Yahoo!: web-scale search indexing; Hadoop Summit, HUG, etc.
  http://developer.yahoo.com/hadoop/
• circa 2009 – Amazon AWS: Elastic MapReduce; Hadoop modified for EC2/S3, plus support for Hive, Pig, Cascading, etc.
  http://aws.amazon.com/elasticmapreduce/

Spark Research Papers
• Spark: Cluster Computing with Working Sets
  Matei Zaharia, Mosharaf Chowdhury, Michael J. Franklin, Scott Shenker, Ion Stoica
  USENIX HotCloud (2010)
  people.csail.mit.edu/matei/papers/2010/hotcloud_spark.pdf
• Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing
  Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael J. Franklin, Scott Shenker, Ion Stoica
  NSDI (2012)
  usenix.org/system/files/conference/nsdi12/nsdi12-final138.pdf