hadoop-streaming
Reference: https://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/
This is very similar to the program in that tutorial: it counts word frequencies, while we count keyword frequencies. The logic is essentially the same; only the mapper needs to be changed.
A Hadoop MapReduce streaming job consists of two programs: a mapper and a reducer.
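Judging from the split logic in the mapper below, each input record is assumed to be a comma-separated line whose last field holds a semicolon-separated keyword list (the original notes do not spell this out). A hypothetical sample record:

2019-01-02,some article title,Machine Learning;Hadoop;hadoop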
mapper.py
#!/usr/bin/env python
import sys

# Read records from standard input; the last comma-separated field is assumed
# to hold a semicolon-separated keyword list.
for line in sys.stdin:
    line = line.strip()
    keys = line.split(",")[-1].split(";")
    for key in keys:
        # Normalize the keyword and emit "<keyword>\t1" for the reducer.
        key = key.lower().strip()
        value = 1
        print("%s\t%d" % (key, value))
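For the hypothetical sample record above, the mapper lower-cases each keyword and emits one tab-separated line per keyword: "machine learning\t1", "hadoop\t1", "hadoop\t1". Hadoop's shuffle phase then sorts these lines by key, so the reducer sees identical keywords grouped together.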
reducer.py

#!/usr/bin/env python
import sys

last_key = None
running_total = 0

# Input arrives sorted by key, so all lines for a given keyword are adjacent.
for input_line in sys.stdin:
    try:
        input_line = input_line.strip()
        this_key, value = input_line.split("\t", 1)
        value = int(value)
        if last_key == this_key:
            running_total += value
        else:
            if last_key:
                # Key changed: emit the total for the previous keyword.
                print("%d\t%s" % (running_total, last_key))
            running_total = value
            last_key = this_key
    except Exception:
        # Skip malformed input lines.
        pass

# Emit the total for the last keyword (guard against empty input).
if last_key:
    print("%d\t%s" % (running_total, last_key))
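Before submitting to the cluster, the pipeline can be smoke-tested locally by piping the mapper output through a sort into the reducer. Below is a minimal sketch in Python that simulates this mapper | sort | reducer flow; the file names mapper.py/reducer.py and the sample record are assumptions carried over from the notes above, not part of the original.

#!/usr/bin/env python
# local_test.py - simulate the Hadoop streaming pipeline: mapper | sort | reducer.
import subprocess
import sys

# NOTE: the record format below is an assumption, not taken from the original notes.
sample = "2019-01-02,some article title,Machine Learning;Hadoop;hadoop\n"

# Run the mapper on the sample record.
mapped = subprocess.run([sys.executable, "mapper.py"], input=sample,
                        capture_output=True, text=True).stdout

# Hadoop's shuffle phase sorts mapper output by key; emulate it with sorted().
shuffled = "".join(sorted(mapped.splitlines(keepends=True)))

# Run the reducer on the sorted stream.
reduced = subprocess.run([sys.executable, "reducer.py"], input=shuffled,
                         capture_output=True, text=True).stdout

print(reduced)  # expected: "2\thadoop" then "1\tmachine learning"

On the cluster, the same two scripts are submitted through the Hadoop streaming jar, roughly: hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar -files mapper.py,reducer.py -mapper mapper.py -reducer reducer.py -input <hdfs input path> -output <hdfs output path> (the jar path and HDFS paths here are placeholders; see the referenced tutorial for a full invocation).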