- [Unresolved] 15-Day Big Data Pilot Project
Data isn't loading into HBase...
5.6 Real-Time Loading Pilot, Step 4: in the loading test, data isn't being loaded into HBase..! I followed what was posted on the Q&A board, but there still seems to be an error somewhere. Data does arrive in Kafka, but no topology shows up in Storm at all, and no data reaches Redis either. Where could the error be occurring?
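Since data reaches Kafka but no topology appears in Storm, the break is most likely at topology submission rather than in HBase itself. Below is a hedged checklist, staged into a script for review; the host names, topic name, topology log path, and table names are assumptions based on a typical setup for this pilot project, so substitute your own values before running anything.

```shell
#!/bin/sh
# Write a step-by-step pipeline check to pipeline-checks.sh for review.
# All host/topic/table names below are assumptions; adjust to your VMs.
cat > pipeline-checks.sh <<'EOF'
# 1) Confirm events really reach Kafka (you said they do):
kafka-console-consumer --bootstrap-server server02.hadoop.com:9092 --topic SmartCar-Topic
# 2) An empty `storm list` means the topology was never submitted successfully:
storm list
# 3) The real stack trace is usually in the Storm nimbus/worker logs:
tail -n 100 /var/log/storm/nimbus.log
# 4) Check the sinks directly:
echo "count 'DriverCarInfo'" | hbase shell
redis-cli -h server02.hadoop.com ping
EOF
cat pipeline-checks.sh
```

If `storm list` shows nothing, re-run the topology submission command and read its output for the first exception; the Redis and HBase symptoms are then downstream effects, not separate faults.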
- [Unresolved] 15-Day Big Data Pilot Project
Log server
A beginner question to get my concepts straight: is the "log server" people commonly talk about in web development the same thing as the collection and loading layers covered in this course? Could you clarify how big data systems relate to log servers? I'm quite confused. Thank you.
- [Unresolved] 15-Day Big Data Pilot Project
Difference between an RDBMS and big data
In short, is the key difference that an RDBMS stores only the current state, while a big data system stores the full historical time series of states?
- [Unresolved] 15-Day Big Data Pilot Project
Could you share the [Reference] environment-setup PPT materials?
Hello, I'm getting a lot out of this course. Could you share the PPT files for the environment-setup guides marked [Reference]?
- [Unresolved] 15-Day Big Data Pilot Project
Question about cluster setup
Hello. When I go into Cloudera Manager, it says cluster1 already exists, and when I try cluster2 instead, the tab for selecting available hosts doesn't appear. What should I do?
- [Unresolved] 15-Day Big Data Pilot Project
Why is NAT used?
Hello, thank you for the helpful course. I have a question: as I understand it, plain NAT does not allow data transfer between hosts, while a NAT Network does (please correct me if I'm wrong). That's why I understood the initial environment setup to create a NAT Network. Additionally, the NAT Network CIDR at the start is 10.~ while the host IP is 196.~, yet everything seems to work without making them match. In principle, shouldn't they be made consistent?
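For what it's worth, the distinction is roughly this: plain NAT gives each VM its own isolated virtual router, so guests cannot reach each other, while a NAT Network puts all guests on one shared subnet with outbound internet access. That subnet sits behind its own virtual router, so its CIDR does not need to match the host's IP range; the host reaches the guests through port-forwarding rules instead. A sketch of the VirtualBox commands involved (the network name and CIDR are placeholder assumptions, not the course's exact values):

```shell
#!/bin/sh
# Stage example VBoxManage commands for creating/inspecting a NAT Network.
# "BigdataNet" and 10.0.1.0/24 are placeholder assumptions.
cat > natnet-example.sh <<'EOF'
VBoxManage natnetwork add --netname BigdataNet --network "10.0.1.0/24" --enable --dhcp off
VBoxManage natnetwork list
EOF
cat natnet-example.sh
```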
- [Unresolved] 15-Day Big Data Pilot Project
Error while installing Cloudera Manager
Hello. I've just started the course and am installing the required files, but the following error occurs while installing Cloudera Manager. I've tried reinstalling several times without success. Any help would be appreciated.
- [Unresolved] 15-Day Big Data Pilot Project
"Couldn't resolve host" error on yum install
Hello! I'm currently on the Redis installation lecture in the Loading 2 part. When I run yum install commands such as yum install gcc*, a "Couldn't resolve host" error occurs. From googling, this appears to be a DNS problem, but setting nameserver 192.168.56.102, 8.8.8.8, etc. in resolv.conf doesn't help either. I'm out of ideas, so I'm leaving this question. Any help would be appreciated.
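A common cause on CentOS-style VMs is that the network service overwrites /etc/resolv.conf on restart, or that the default route is missing, so the nameserver entries never take effect. A minimal sketch that stages the DNS config in a local file first so it can be reviewed before copying into place (the Google DNS addresses and the PEERDNS hint are assumptions about a standard CentOS/VirtualBox setup):

```shell
#!/bin/sh
# Stage a resolv.conf locally, review it, then copy it into place as root.
# If /etc/resolv.conf keeps reverting, set PEERDNS=no in the interface's
# ifcfg file (assumption: CentOS-style network scripts) before restarting.
cat > resolv.conf.new <<'EOF'
nameserver 8.8.8.8
nameserver 8.8.4.4
EOF
cat resolv.conf.new
# Then, on the VM:
#   cp resolv.conf.new /etc/resolv.conf
#   ip route show default   # no output here means the gateway itself is missing
```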
- [Resolved] 15-Day Big Data Pilot Project
Exporting analysis results externally with Sqoop
The hands-on work went smoothly until Chapter 7 (Analysis), where I'm now getting stuck in several places. When I run the cp command to copy the PostgreSQL JDBC driver into Sqoop's library path, a "No such file or directory" error occurs. As a workaround, I copied the file to that location with FileZilla and then ran the Sqoop export command, but that failed with an error as well.
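A "No such file or directory" from cp usually means the source jar path (not the destination) is wrong, so it's worth confirming the jar with ls before copying. Below is a hedged sketch of the copy plus export, staged into a script for review; the driver filename, CDH lib path, connection string, table name, and delimiter are all assumptions about a typical setup for this chapter, not the course's exact values.

```shell
#!/bin/sh
# Stage the copy + Sqoop export commands into a reviewable script.
# Every path and name here is an assumption; verify each with ls first.
cat > sqoop-export-example.sh <<'EOF'
ls -l /home/pilot-pjt/postgresql-*.jar   # confirm the driver jar really exists
cp /home/pilot-pjt/postgresql-*.jar /opt/cloudera/parcels/CDH/lib/sqoop/lib/
sqoop export \
  --connect jdbc:postgresql://server02.hadoop.com:5432/postgres \
  --username postgres -P \
  --table managed_smartcar_symptom_info \
  --export-dir /pilot-pjt/postgres \
  --input-fields-terminated-by ','
EOF
cat sqoop-export-example.sh
```

If the export still fails after the driver is in place, the first exception in Sqoop's output usually names the real problem (bad delimiter, missing target table, or column count mismatch).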
- [Resolved] 15-Day Big Data Pilot Project
Real-time analysis with Zeppelin - Spark Scala code
In the Zeppelin notebook, an error also occurs in the part where I load the Spark data and convert the loaded dataset into a data structure for use in Spark. What might I have done wrong?
- [Resolved] 15-Day Big Data Pilot Project
Analysis step 5 - Mahout recommender - smart-car accessory recommendations
I'm working through Analysis step 5 - Mahout recommender - smart-car accessory recommendations. Everything was running fine during the exercise until errors started appearing; I'm not sure what I did wrong.
1) Restructured the "smart-car accessory purchase history" data (managed_smartcar_item_buylist_info) into a format usable by the Mahout recommender, then ran the file-generation query in the Hive editor: OK.
2) Verified with the more command that the file generated from Hive onto local disk (to be used as the recommender's input data) was created correctly: OK.
3) Created an HDFS path and stored the generated file there for use as the Mahout recommender's input: OK.
4) Ran the Mahout recommendation analyzer... it seems to run fine at first, but eventually an error message appears. I've attached everything from the first message after running the command to the last.
[root@server02 pilot-pjt]# mahout recommenditembased -i /pilot-pjt/mahout/recommendation/input/item_buylist.txt -o /pilot-pjt/mahout/recommendation/output/ -s SIMILARITY_COCCURRENCE -n 3 Running on hadoop, using /usr/bin/hadoop and HADOOP_CONF_DIR= MAHOUT-JOB: /home/pilot-pjt/mahout/mahout-examples-0.13.0-job.jar WARNING: Use "yarn jar" to launch YARN applications. 21/11/21 03:30:22 INFO AbstractJob: Command line arguments: {--booleanData=[false], --endPhase=[2147483647], --input=[/pilot-pjt/mahout/recommendation/input/item_buylist.txt], --maxPrefsInItemSimilarity=[500], --maxPrefsPerUser=[10], --maxSimilaritiesPerItem=[100], --minPrefsPerUser=[1], --numRecommendations=[3], --output=[/pilot-pjt/mahout/recommendation/output/], --similarityClassname=[SIMILARITY_COCCURRENCE], --startPhase=[0], --tempDir=[temp]} 21/11/21 03:30:22 INFO AbstractJob: Command line arguments: {--booleanData=[false], --endPhase=[2147483647], --input=[/pilot-pjt/mahout/recommendation/input/item_buylist.txt], --minPrefsPerUser=[1], --output=[temp/preparePreferenceMatrix], --ratingShift=[0.0], --startPhase=[0], --tempDir=[temp]} 21/11/21 03:30:22 INFO deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir 21/11/21 03:30:22 INFO deprecation: mapred.compress.map.output is deprecated. Instead, use mapreduce.map.output.compress 21/11/21 03:30:22 INFO deprecation: mapred.output.dir is deprecated. 
Instead, use mapreduce.output.fileoutputformat.outputdir 21/11/21 03:30:23 INFO RMProxy: Connecting to ResourceManager at server01.hadoop.com/192.168.56.101:8032 21/11/21 03:30:28 INFO JobResourceUploader: Disabling Erasure Coding for path: /user/root/.staging/job_1637424392789_0002 21/11/21 03:30:32 INFO FileInputFormat: Total input files to process : 1 21/11/21 03:30:32 INFO JobSubmitter: number of splits:1 21/11/21 03:30:33 INFO JobSubmitter: Submitting tokens for job: job_1637424392789_0002 21/11/21 03:30:33 INFO JobSubmitter: Executing with tokens: [] 21/11/21 03:30:34 INFO Configuration: resource-types.xml not found 21/11/21 03:30:34 INFO ResourceUtils: Unable to find 'resource-types.xml'. 21/11/21 03:30:34 INFO YarnClientImpl: Submitted application application_1637424392789_0002 21/11/21 03:30:34 INFO Job: The url to track the job: http://server01.hadoop.com:8088/proxy/application_1637424392789_0002/ 21/11/21 03:30:34 INFO Job: Running job: job_1637424392789_0002 21/11/21 03:31:34 INFO Job: Job job_1637424392789_0002 running in uber mode : false 21/11/21 03:31:34 INFO Job: map 0% reduce 0% 21/11/21 03:32:18 INFO Job: map 100% reduce 0% 21/11/21 03:32:43 INFO Job: map 100% reduce 100% 21/11/21 03:32:44 INFO Job: Job job_1637424392789_0002 completed successfully 21/11/21 03:32:45 INFO Job: Counters: 54 File System Counters FILE: Number of bytes read=269 FILE: Number of bytes written=442733 FILE: Number of read operations=0 FILE: Number of large read operations=0 FILE: Number of write operations=0 HDFS: Number of bytes read=2072067 HDFS: Number of bytes written=643 HDFS: Number of read operations=8 HDFS: Number of large read operations=0 HDFS: Number of write operations=2 HDFS: Number of bytes read erasure-coded=0 Job Counters Launched map tasks=1 Launched reduce tasks=1 Data-local map tasks=1 Total time spent by all maps in occupied slots (ms)=40610 Total time spent by all reduces in occupied slots (ms)=18665 Total time spent by all map tasks (ms)=40610 Total 
time spent by all reduce tasks (ms)=18665 Total vcore-milliseconds taken by all map tasks=40610 Total vcore-milliseconds taken by all reduce tasks=18665 Total megabyte-milliseconds taken by all map tasks=41584640 Total megabyte-milliseconds taken by all reduce tasks=19112960 Map-Reduce Framework Map input records=94178 Map output records=94178 Map output bytes=941780 Map output materialized bytes=265 Input split bytes=151 Combine input records=94178 Combine output records=30 Reduce input groups=30 Reduce shuffle bytes=265 Reduce input records=30 Reduce output records=30 Spilled Records=60 Shuffled Maps =1 Failed Shuffles=0 Merged Map outputs=1 GC time elapsed (ms)=475 CPU time spent (ms)=2570 Physical memory (bytes) snapshot=503918592 Virtual memory (bytes) snapshot=5090603008 Total committed heap usage (bytes)=365957120 Peak Map Physical memory (bytes)=366530560 Peak Map Virtual memory (bytes)=2539433984 Peak Reduce Physical memory (bytes)=137388032 Peak Reduce Virtual memory (bytes)=2551169024 Shuffle Errors BAD_ID=0 CONNECTION=0 IO_ERROR=0 WRONG_LENGTH=0 WRONG_MAP=0 WRONG_REDUCE=0 File Input Format Counters Bytes Read=2071916 File Output Format Counters Bytes Written=643 21/11/21 03:32:45 INFO RMProxy: Connecting to ResourceManager at server01.hadoop.com/192.168.56.101:8032 21/11/21 03:32:45 INFO JobResourceUploader: Disabling Erasure Coding for path: /user/root/.staging/job_1637424392789_0003 21/11/21 03:32:47 INFO FileInputFormat: Total input files to process : 1 21/11/21 03:32:48 INFO JobSubmitter: number of splits:1 21/11/21 03:32:49 INFO JobSubmitter: Submitting tokens for job: job_1637424392789_0003 21/11/21 03:32:49 INFO JobSubmitter: Executing with tokens: [] 21/11/21 03:32:49 INFO YarnClientImpl: Submitted application application_1637424392789_0003 21/11/21 03:32:49 INFO Job: The url to track the job: http://server01.hadoop.com:8088/proxy/application_1637424392789_0003/ 21/11/21 03:32:49 INFO Job: Running job: job_1637424392789_0003 21/11/21 03:33:27 
INFO Job: Job job_1637424392789_0003 running in uber mode : false 21/11/21 03:33:27 INFO Job: map 0% reduce 0% 21/11/21 03:33:51 INFO Job: map 100% reduce 0% 21/11/21 03:34:08 INFO Job: map 100% reduce 100% 21/11/21 03:34:08 INFO Job: Job job_1637424392789_0003 completed successfully 21/11/21 03:34:08 INFO Job: Counters: 55 File System Counters FILE: Number of bytes read=402933 FILE: Number of bytes written=1248777 FILE: Number of read operations=0 FILE: Number of large read operations=0 FILE: Number of write operations=0 HDFS: Number of bytes read=2072067 HDFS: Number of bytes written=522978 HDFS: Number of read operations=8 HDFS: Number of large read operations=0 HDFS: Number of write operations=2 HDFS: Number of bytes read erasure-coded=0 Job Counters Launched map tasks=1 Launched reduce tasks=1 Data-local map tasks=1 Total time spent by all maps in occupied slots (ms)=21247 Total time spent by all reduces in occupied slots (ms)=13637 Total time spent by all map tasks (ms)=21247 Total time spent by all reduce tasks (ms)=13637 Total vcore-milliseconds taken by all map tasks=21247 Total vcore-milliseconds taken by all reduce tasks=13637 Total megabyte-milliseconds taken by all map tasks=21756928 Total megabyte-milliseconds taken by all reduce tasks=13964288 Map-Reduce Framework Map input records=94178 Map output records=94178 Map output bytes=1224314 Map output materialized bytes=402929 Input split bytes=151 Combine input records=0 Combine output records=0 Reduce input groups=2449 Reduce shuffle bytes=402929 Reduce input records=94178 Reduce output records=2449 Spilled Records=188356 Shuffled Maps =1 Failed Shuffles=0 Merged Map outputs=1 GC time elapsed (ms)=298 CPU time spent (ms)=2660 Physical memory (bytes) snapshot=503132160 Virtual memory (bytes) snapshot=5092700160 Total committed heap usage (bytes)=365957120 Peak Map Physical memory (bytes)=372359168 Peak Map Virtual memory (bytes)=2539433984 Peak Reduce Physical memory (bytes)=130772992 Peak Reduce 
Virtual memory (bytes)=2553266176 Shuffle Errors BAD_ID=0 CONNECTION=0 IO_ERROR=0 WRONG_LENGTH=0 WRONG_MAP=0 WRONG_REDUCE=0 File Input Format Counters Bytes Read=2071916 File Output Format Counters Bytes Written=522978 org.apache.mahout.cf.taste.hadoop.item.ToUserVectorsReducer$Counters USERS=2449 21/11/21 03:34:08 INFO RMProxy: Connecting to ResourceManager at server01.hadoop.com/192.168.56.101:8032 21/11/21 03:34:08 INFO JobResourceUploader: Disabling Erasure Coding for path: /user/root/.staging/job_1637424392789_0004 21/11/21 03:34:10 INFO FileInputFormat: Total input files to process : 1 21/11/21 03:34:10 INFO JobSubmitter: number of splits:1 21/11/21 03:34:10 INFO JobSubmitter: Submitting tokens for job: job_1637424392789_0004 21/11/21 03:34:10 INFO JobSubmitter: Executing with tokens: [] 21/11/21 03:34:10 INFO YarnClientImpl: Submitted application application_1637424392789_0004 21/11/21 03:34:10 INFO Job: The url to track the job: http://server01.hadoop.com:8088/proxy/application_1637424392789_0004/ 21/11/21 03:34:10 INFO Job: Running job: job_1637424392789_0004 21/11/21 03:34:24 INFO Job: Job job_1637424392789_0004 running in uber mode : false 21/11/21 03:34:24 INFO Job: map 0% reduce 0% 21/11/21 03:34:33 INFO Job: Task Id : attempt_1637424392789_0004_m_000000_0, Status : FAILED [2021-11-21 03:34:31.664]Container killed on request. Exit code is 137 [2021-11-21 03:34:31.666]Container exited with a non-zero exit code 137. 
[2021-11-21 03:34:31.675]Killed by external signal 21/11/21 03:34:43 INFO Job: map 100% reduce 0% 21/11/21 03:34:56 INFO Job: map 100% reduce 100% 21/11/21 03:34:57 INFO Job: Job job_1637424392789_0004 completed successfully 21/11/21 03:34:57 INFO Job: Counters: 56 File System Counters FILE: Number of bytes read=250272 FILE: Number of bytes written=942767 FILE: Number of read operations=0 FILE: Number of large read operations=0 FILE: Number of write operations=0 HDFS: Number of bytes read=523138 HDFS: Number of bytes written=424085 HDFS: Number of read operations=9 HDFS: Number of large read operations=0 HDFS: Number of write operations=2 HDFS: Number of bytes read erasure-coded=0 Job Counters Failed map tasks=1 Launched map tasks=2 Launched reduce tasks=1 Other local map tasks=1 Data-local map tasks=1 Total time spent by all maps in occupied slots (ms)=14569 Total time spent by all reduces in occupied slots (ms)=9859 Total time spent by all map tasks (ms)=14569 Total time spent by all reduce tasks (ms)=9859 Total vcore-milliseconds taken by all map tasks=14569 Total vcore-milliseconds taken by all reduce tasks=9859 Total megabyte-milliseconds taken by all map tasks=14918656 Total megabyte-milliseconds taken by all reduce tasks=10095616 Map-Reduce Framework Map input records=2449 Map output records=52916 Map output bytes=1005404 Map output materialized bytes=250274 Input split bytes=160 Combine input records=52916 Combine output records=30 Reduce input groups=30 Reduce shuffle bytes=250274 Reduce input records=30 Reduce output records=30 Spilled Records=60 Shuffled Maps =1 Failed Shuffles=0 Merged Map outputs=1 GC time elapsed (ms)=227 CPU time spent (ms)=2000 Physical memory (bytes) snapshot=524152832 Virtual memory (bytes) snapshot=5090603008 Total committed heap usage (bytes)=365957120 Peak Map Physical memory (bytes)=385503232 Peak Map Virtual memory (bytes)=2539433984 Peak Reduce Physical memory (bytes)=138649600 Peak Reduce Virtual memory (bytes)=2551169024 
Shuffle Errors BAD_ID=0 CONNECTION=0 IO_ERROR=0 WRONG_LENGTH=0 WRONG_MAP=0 WRONG_REDUCE=0 File Input Format Counters Bytes Read=522978 File Output Format Counters Bytes Written=424085 21/11/21 03:34:58 INFO AbstractJob: Command line arguments: {--endPhase=[2147483647], --excludeSelfSimilarity=[true], --input=[temp/preparePreferenceMatrix/ratingMatrix], --maxObservationsPerColumn=[500], --maxObservationsPerRow=[500], --maxSimilaritiesPerRow=[100], --numberOfColumns=[2449], --output=[temp/similarityMatrix], --randomSeed=[-9223372036854775808], --similarityClassname=[SIMILARITY_COCCURRENCE], --startPhase=[0], --tempDir=[temp], --threshold=[4.9E-324]} 21/11/21 03:34:58 INFO RMProxy: Connecting to ResourceManager at server01.hadoop.com/192.168.56.101:8032 21/11/21 03:34:58 INFO JobResourceUploader: Disabling Erasure Coding for path: /user/root/.staging/job_1637424392789_0005 21/11/21 03:35:00 INFO FileInputFormat: Total input files to process : 1 21/11/21 03:35:00 INFO JobSubmitter: number of splits:1 21/11/21 03:35:00 INFO JobSubmitter: Submitting tokens for job: job_1637424392789_0005 21/11/21 03:35:00 INFO JobSubmitter: Executing with tokens: [] 21/11/21 03:35:00 INFO YarnClientImpl: Submitted application application_1637424392789_0005 21/11/21 03:35:00 INFO Job: The url to track the job: http://server01.hadoop.com:8088/proxy/application_1637424392789_0005/ 21/11/21 03:35:00 INFO Job: Running job: job_1637424392789_0005 21/11/21 03:35:19 INFO Job: Job job_1637424392789_0005 running in uber mode : false 21/11/21 03:35:19 INFO Job: map 0% reduce 0% 21/11/21 03:35:32 INFO Job: map 100% reduce 0% 21/11/21 03:35:41 INFO Job: map 100% reduce 100% 21/11/21 03:35:42 INFO Job: Job job_1637424392789_0005 completed successfully 21/11/21 03:35:42 INFO Job: Counters: 54 File System Counters FILE: Number of bytes read=14503 FILE: Number of bytes written=471767 FILE: Number of read operations=0 FILE: Number of large read operations=0 FILE: Number of write operations=0 HDFS: Number 
of bytes read=424246 HDFS: Number of bytes written=29494 HDFS: Number of read operations=9 HDFS: Number of large read operations=0 HDFS: Number of write operations=3 HDFS: Number of bytes read erasure-coded=0 Job Counters Launched map tasks=1 Launched reduce tasks=1 Data-local map tasks=1 Total time spent by all maps in occupied slots (ms)=9709 Total time spent by all reduces in occupied slots (ms)=6592 Total time spent by all map tasks (ms)=9709 Total time spent by all reduce tasks (ms)=6592 Total vcore-milliseconds taken by all map tasks=9709 Total vcore-milliseconds taken by all reduce tasks=6592 Total megabyte-milliseconds taken by all map tasks=9942016 Total megabyte-milliseconds taken by all reduce tasks=6750208 Map-Reduce Framework Map input records=30 Map output records=1 Map output bytes=29396 Map output materialized bytes=14499 Input split bytes=161 Combine input records=1 Combine output records=1 Reduce input groups=1 Reduce shuffle bytes=14499 Reduce input records=1 Reduce output records=0 Spilled Records=2 Shuffled Maps =1 Failed Shuffles=0 Merged Map outputs=1 GC time elapsed (ms)=276 CPU time spent (ms)=1520 Physical memory (bytes) snapshot=537358336 Virtual memory (bytes) snapshot=5092700160 Total committed heap usage (bytes)=365957120 Peak Map Physical memory (bytes)=398348288 Peak Map Virtual memory (bytes)=2539433984 Peak Reduce Physical memory (bytes)=139010048 Peak Reduce Virtual memory (bytes)=2553266176 Shuffle Errors BAD_ID=0 CONNECTION=0 IO_ERROR=0 WRONG_LENGTH=0 WRONG_MAP=0 WRONG_REDUCE=0 File Input Format Counters Bytes Read=424085 File Output Format Counters Bytes Written=98 21/11/21 03:35:42 INFO RMProxy: Connecting to ResourceManager at server01.hadoop.com/192.168.56.101:8032 21/11/21 03:35:42 INFO JobResourceUploader: Disabling Erasure Coding for path: /user/root/.staging/job_1637424392789_0006 21/11/21 03:35:43 INFO FileInputFormat: Total input files to process : 1 21/11/21 03:35:43 INFO JobSubmitter: number of splits:1 21/11/21 
03:35:43 INFO JobSubmitter: Submitting tokens for job: job_1637424392789_0006 21/11/21 03:35:43 INFO JobSubmitter: Executing with tokens: [] 21/11/21 03:35:43 INFO YarnClientImpl: Submitted application application_1637424392789_0006 21/11/21 03:35:43 INFO Job: The url to track the job: http://server01.hadoop.com:8088/proxy/application_1637424392789_0006/ 21/11/21 03:35:43 INFO Job: Running job: job_1637424392789_0006 21/11/21 03:35:56 INFO Job: Job job_1637424392789_0006 running in uber mode : false 21/11/21 03:35:56 INFO Job: map 0% reduce 0% 21/11/21 03:36:10 INFO Job: Task Id : attempt_1637424392789_0006_m_000000_0, Status : FAILED Error: java.lang.IllegalStateException: java.lang.ClassNotFoundException: SIMILARITY_COCCURRENCE at org.apache.mahout.common.ClassUtils.instantiateAs(ClassUtils.java:30) at org.apache.mahout.math.hadoop.similarity.cooccurrence.RowSimilarityJob$VectorNormMapper.setup(RowSimilarityJob.java:270) at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168) Caused by: java.lang.ClassNotFoundException: SIMILARITY_COCCURRENCE at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.mahout.common.ClassUtils.instantiateAs(ClassUtils.java:28) ... 
9 more 21/11/21 03:36:17 INFO Job: Task Id : attempt_1637424392789_0006_m_000000_1, Status : FAILED Error: java.lang.IllegalStateException: java.lang.ClassNotFoundException: SIMILARITY_COCCURRENCE at org.apache.mahout.common.ClassUtils.instantiateAs(ClassUtils.java:30) at org.apache.mahout.math.hadoop.similarity.cooccurrence.RowSimilarityJob$VectorNormMapper.setup(RowSimilarityJob.java:270) at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168) Caused by: java.lang.ClassNotFoundException: SIMILARITY_COCCURRENCE at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.mahout.common.ClassUtils.instantiateAs(ClassUtils.java:28) ... 
9 more 21/11/21 03:36:24 INFO Job: Task Id : attempt_1637424392789_0006_m_000000_2, Status : FAILED Error: java.lang.IllegalStateException: java.lang.ClassNotFoundException: SIMILARITY_COCCURRENCE at org.apache.mahout.common.ClassUtils.instantiateAs(ClassUtils.java:30) at org.apache.mahout.math.hadoop.similarity.cooccurrence.RowSimilarityJob$VectorNormMapper.setup(RowSimilarityJob.java:270) at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168) Caused by: java.lang.ClassNotFoundException: SIMILARITY_COCCURRENCE at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.mahout.common.ClassUtils.instantiateAs(ClassUtils.java:28) ... 9 more 21/11/21 03:36:31 INFO Job: map 100% reduce 100% 21/11/21 03:36:32 INFO Job: Job job_1637424392789_0006 failed with state FAILED due to: Task failed task_1637424392789_0006_m_000000 Job failed as tasks failed. 
failedMaps:1 failedReduces:0 killedMaps:0 killedReduces: 0 21/11/21 03:36:33 INFO Job: Counters: 10 Job Counters Failed map tasks=4 Killed reduce tasks=1 Launched map tasks=4 Other local map tasks=3 Data-local map tasks=1 Total time spent by all maps in occupied slots (ms)=27630 Total time spent by all reduces in occupied slots (ms)=0 Total time spent by all map tasks (ms)=27630 Total vcore-milliseconds taken by all map tasks=27630 Total megabyte-milliseconds taken by all map tasks=28293120 21/11/21 03:36:33 INFO RMProxy: Connecting to ResourceManager at server01.hadoop.com/192.168.56.101:8032 21/11/21 03:36:33 INFO JobResourceUploader: Disabling Erasure Coding for path: /user/root/.staging/job_1637424392789_0007 21/11/21 03:36:34 INFO FileInputFormat: Total input files to process : 1 21/11/21 03:36:34 INFO JobSubmitter: Cleaning up the staging area /user/root/.staging/job_1637424392789_0007 Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://server01.hadoop.com:8020/user/root/temp/similarityMatrix at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:330) at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:272) at org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:59) at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:394) at org.apache.hadoop.mapreduce.lib.input.DelegatingInputFormat.getSplits(DelegatingInputFormat.java:115) at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:310) at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:327) at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:200) at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570) at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567) at 
java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567) at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1588) at org.apache.mahout.cf.taste.hadoop.item.RecommenderJob.run(RecommenderJob.java:249) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at org.apache.mahout.cf.taste.hadoop.item.RecommenderJob.main(RecommenderJob.java:335) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71) at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144) at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:152) at org.apache.mahout.driver.MahoutDriver.main(MahoutDriver.java:195) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.util.RunJar.run(RunJar.java:313) at org.apache.hadoop.util.RunJar.main(RunJar.java:227) [root@server02 pilot-pjt]#
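The stack trace above ends with `ClassNotFoundException: SIMILARITY_COCCURRENCE`: Mahout spells its co-occurrence measure `SIMILARITY_COOCCURRENCE` (double "OO"), and the value is only resolved into a class when the RowSimilarityJob mappers start, which is why the earlier jobs complete before the failure. A sketch of the rerun with the corrected spelling, staged into a script (paths are taken from the log above; clearing the temp and output dirs first avoids "already exists" errors on rerun):

```shell
#!/bin/sh
# Stage the corrected Mahout command (note SIMILARITY_COOCCURRENCE, double "OO").
cat > mahout-rerun.sh <<'EOF'
hdfs dfs -rm -r -f /user/root/temp /pilot-pjt/mahout/recommendation/output
mahout recommenditembased \
  -i /pilot-pjt/mahout/recommendation/input/item_buylist.txt \
  -o /pilot-pjt/mahout/recommendation/output/ \
  -s SIMILARITY_COOCCURRENCE \
  -n 3
EOF
cat mahout-rerun.sh
```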
- [Unresolved] 15-Day Big Data Pilot Project
Hue installation error
Hello! While adding the Hue service, the error below occurs. Restarting the cluster and removing CPU-heavy services such as Flume didn't help either. Since the later exercises depend on Hue working, I'd really appreciate an answer!
- [Unresolved] 15-Day Big Data Pilot Project
Analysis - real-time analysis with Zeppelin
I'm working through Chapter 7 (Big Data Analysis), section 7.6 Analysis Pilot Step 4 - real-time analysis with Zeppelin. When I write and run the Spark Scala code from Figure 7.43 in the Zeppelin notebook, an error pops up.
- [Unresolved] 15-Day Big Data Pilot Project
Subject 5 workflow execution error
When running Subject 5 - Workflow, an error occurs at the third Hive query. The log is: 2021-11-17 17:08:05,939 INFO org.apache.oozie.service.JPAService: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[] No results found 2021-11-17 17:08:06,056 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@:start:] Start action [0000000-211117134747764-oozie-oozi-W@:start:] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10] 2021-11-17 17:08:06,089 INFO org.apache.oozie.action.control.StartActionExecutor: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@:start:] Starting action 2021-11-17 17:08:06,122 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@:start:] [***0000000-211117134747764-oozie-oozi-W@:start:***]Action status=DONE 2021-11-17 17:08:06,132 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@:start:] [***0000000-211117134747764-oozie-oozi-W@:start:***]Action updated in DB! 
2021-11-17 17:08:07,005 INFO org.apache.oozie.action.control.StartActionExecutor: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@:start:] Action ended with external status [OK] 2021-11-17 17:08:07,619 INFO org.apache.oozie.service.JPAService: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@:start:] No results found 2021-11-17 17:08:07,684 INFO org.apache.oozie.command.wf.WorkflowNotificationXCommand: SERVER[server02.hadoop.com] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@:start:] No Notification URL is defined. Therefore nothing to notify for job 0000000-211117134747764-oozie-oozi-W@:start: 2021-11-17 17:08:07,688 INFO org.apache.oozie.command.wf.WorkflowNotificationXCommand: SERVER[server02.hadoop.com] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[] No Notification URL is defined. Therefore nothing to notify for job 0000000-211117134747764-oozie-oozi-W 2021-11-17 17:08:07,828 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-537b] Start action [0000000-211117134747764-oozie-oozi-W@hive-537b] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10] 2021-11-17 17:08:07,848 INFO org.apache.oozie.action.hadoop.Hive2ActionExecutor: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-537b] Starting action. 
Getting Action File System 2021-11-17 17:08:07,931 INFO org.apache.oozie.service.HadoopAccessorService: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-537b] Processing configuration file [/var/run/cloudera-scm-agent/process/274-oozie-OOZIE_SERVER/action-conf/default.xml] for action [default] and hostPort [*] 2021-11-17 17:08:07,966 INFO org.apache.oozie.service.HadoopAccessorService: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-537b] Processing configuration file [/var/run/cloudera-scm-agent/process/274-oozie-OOZIE_SERVER/action-conf/hive2.xml] for action [hive2] and hostPort [*] 2021-11-17 17:08:16,679 WARN org.apache.oozie.action.hadoop.Hive2ActionExecutor: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-537b] Invalid configuration value [null] defined for launcher max attempts count, using default [2]. 
2021-11-17 17:08:16,703 INFO org.apache.oozie.action.hadoop.YarnACLHandler: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-537b] Not setting ACLs because mapreduce.cluster.acls.enabled is set to false 2021-11-17 17:08:20,280 INFO org.apache.oozie.action.hadoop.Hive2ActionExecutor: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-537b] checking action, hadoop job ID [application_1637124309648_0001] status [RUNNING] 2021-11-17 17:08:20,294 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-537b] [***0000000-211117134747764-oozie-oozi-W@hive-537b***]Action status=RUNNING 2021-11-17 17:08:20,298 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-537b] [***0000000-211117134747764-oozie-oozi-W@hive-537b***]Action updated in DB! 2021-11-17 17:08:20,312 INFO org.apache.oozie.command.wf.WorkflowNotificationXCommand: SERVER[server02.hadoop.com] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-537b] No Notification URL is defined. 
Therefore nothing to notify for job 0000000-211117134747764-oozie-oozi-W@hive-537b 2021-11-17 17:08:41,461 INFO org.apache.oozie.servlet.CallbackServlet: SERVER[server02.hadoop.com] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-537b] callback for action [0000000-211117134747764-oozie-oozi-W@hive-537b] 2021-11-17 17:08:41,990 INFO org.apache.oozie.action.hadoop.Hive2ActionExecutor: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-537b] action completed, external ID [application_1637124309648_0001] 2021-11-17 17:08:42,046 INFO org.apache.oozie.action.hadoop.Hive2ActionExecutor: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-537b] Action ended with external status [SUCCEEDED] 2021-11-17 17:08:42,469 INFO org.apache.oozie.service.JPAService: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-537b] No results found 2021-11-17 17:08:42,566 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e14] Start action [0000000-211117134747764-oozie-oozi-W@hive-6e14] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10] 2021-11-17 17:08:42,592 INFO org.apache.oozie.action.hadoop.Hive2ActionExecutor: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e14] Starting action. 
Getting Action File System 2021-11-17 17:08:47,110 WARN org.apache.oozie.action.hadoop.Hive2ActionExecutor: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e14] Invalid configuration value [null] defined for launcher max attempts count, using default [2]. 2021-11-17 17:08:47,113 INFO org.apache.oozie.action.hadoop.YarnACLHandler: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e14] Not setting ACLs because mapreduce.cluster.acls.enabled is set to false 2021-11-17 17:08:48,576 INFO org.apache.oozie.action.hadoop.Hive2ActionExecutor: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e14] checking action, hadoop job ID [application_1637124309648_0002] status [RUNNING] 2021-11-17 17:08:48,585 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e14] [***0000000-211117134747764-oozie-oozi-W@hive-6e14***]Action status=RUNNING 2021-11-17 17:08:48,587 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e14] [***0000000-211117134747764-oozie-oozi-W@hive-6e14***]Action updated in DB! 2021-11-17 17:08:48,601 INFO org.apache.oozie.command.wf.WorkflowNotificationXCommand: SERVER[server02.hadoop.com] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e14] No Notification URL is defined. 
Therefore nothing to notify for job 0000000-211117134747764-oozie-oozi-W@hive-6e14 2021-11-17 17:08:48,603 INFO org.apache.oozie.command.wf.WorkflowNotificationXCommand: SERVER[server02.hadoop.com] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-537b] No Notification URL is defined. Therefore nothing to notify for job 0000000-211117134747764-oozie-oozi-W@hive-537b 2021-11-17 17:10:36,799 INFO org.apache.oozie.servlet.CallbackServlet: SERVER[server02.hadoop.com] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e14] callback for action [0000000-211117134747764-oozie-oozi-W@hive-6e14] 2021-11-17 17:10:37,091 INFO org.apache.oozie.action.hadoop.Hive2ActionExecutor: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e14] External Child IDs : [job_1637124309648_0003] 2021-11-17 17:10:37,099 INFO org.apache.oozie.action.hadoop.Hive2ActionExecutor: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e14] action completed, external ID [application_1637124309648_0002] 2021-11-17 17:10:37,175 INFO org.apache.oozie.action.hadoop.Hive2ActionExecutor: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e14] Action ended with external status [SUCCEEDED] 2021-11-17 17:10:37,296 INFO org.apache.oozie.service.JPAService: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e14] No results found 2021-11-17 17:10:37,408 INFO 
org.apache.oozie.command.wf.ActionStartXCommand: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e91] Start action [0000000-211117134747764-oozie-oozi-W@hive-6e91] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10] 2021-11-17 17:10:37,425 INFO org.apache.oozie.action.hadoop.Hive2ActionExecutor: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e91] Starting action. Getting Action File System 2021-11-17 17:10:42,219 WARN org.apache.oozie.action.hadoop.Hive2ActionExecutor: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e91] Invalid configuration value [null] defined for launcher max attempts count, using default [2]. 
2021-11-17 17:10:42,222 INFO org.apache.oozie.action.hadoop.YarnACLHandler: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e91] Not setting ACLs because mapreduce.cluster.acls.enabled is set to false 2021-11-17 17:10:44,054 INFO org.apache.oozie.action.hadoop.Hive2ActionExecutor: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e91] checking action, hadoop job ID [application_1637124309648_0004] status [RUNNING] 2021-11-17 17:10:44,060 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e91] [***0000000-211117134747764-oozie-oozi-W@hive-6e91***]Action status=RUNNING 2021-11-17 17:10:44,060 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e91] [***0000000-211117134747764-oozie-oozi-W@hive-6e91***]Action updated in DB! 2021-11-17 17:10:44,071 INFO org.apache.oozie.command.wf.WorkflowNotificationXCommand: SERVER[server02.hadoop.com] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e91] No Notification URL is defined. Therefore nothing to notify for job 0000000-211117134747764-oozie-oozi-W@hive-6e91 2021-11-17 17:10:44,073 INFO org.apache.oozie.command.wf.WorkflowNotificationXCommand: SERVER[server02.hadoop.com] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e14] No Notification URL is defined. 
Therefore nothing to notify for job 0000000-211117134747764-oozie-oozi-W@hive-6e14 2021-11-17 17:10:58,958 INFO org.apache.oozie.servlet.CallbackServlet: SERVER[server02.hadoop.com] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e91] callback for action [0000000-211117134747764-oozie-oozi-W@hive-6e91] 2021-11-17 17:10:59,128 INFO org.apache.oozie.action.hadoop.Hive2ActionExecutor: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e91] action completed, external ID [application_1637124309648_0004] 2021-11-17 17:10:59,132 WARN org.apache.oozie.action.hadoop.Hive2ActionExecutor: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e91] Launcher ERROR, reason: Main Class [org.apache.oozie.action.hadoop.Hive2Main], exit code [2] 2021-11-17 17:10:59,174 INFO org.apache.oozie.action.hadoop.Hive2ActionExecutor: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e91] Action ended with external status [FAILED/KILLED] 2021-11-17 17:10:59,186 INFO org.apache.oozie.command.wf.ActionEndXCommand: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e91] ERROR is considered as FAILED for SLA 2021-11-17 17:10:59,279 INFO org.apache.oozie.service.JPAService: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e91] No results found 2021-11-17 17:10:59,329 INFO 
org.apache.oozie.command.wf.ActionStartXCommand: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@Kill] Start action [0000000-211117134747764-oozie-oozi-W@Kill] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10] 2021-11-17 17:10:59,342 INFO org.apache.oozie.action.control.KillActionExecutor: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@Kill] Starting action 2021-11-17 17:10:59,752 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@Kill] [***0000000-211117134747764-oozie-oozi-W@Kill***]Action status=DONE 2021-11-17 17:10:59,754 INFO org.apache.oozie.command.wf.ActionStartXCommand: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@Kill] [***0000000-211117134747764-oozie-oozi-W@Kill***]Action updated in DB! 2021-11-17 17:10:59,780 INFO org.apache.oozie.action.control.KillActionExecutor: SERVER[server02.hadoop.com] USER[admin] GROUP[-] TOKEN[] APP[Subject 5 - Workflow] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@Kill] Action ended with external status [OK] 2021-11-17 17:10:59,925 INFO org.apache.oozie.command.wf.WorkflowNotificationXCommand: SERVER[server02.hadoop.com] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@Kill] No Notification URL is defined. 
Therefore nothing to notify for job 0000000-211117134747764-oozie-oozi-W@Kill 2021-11-17 17:10:59,927 INFO org.apache.oozie.command.wf.WorkflowNotificationXCommand: SERVER[server02.hadoop.com] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[] No Notification URL is defined. Therefore nothing to notify for job 0000000-211117134747764-oozie-oozi-W 2021-11-17 17:10:59,927 INFO org.apache.oozie.command.wf.WorkflowNotificationXCommand: SERVER[server02.hadoop.com] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000000-211117134747764-oozie-oozi-W] ACTION[0000000-211117134747764-oozie-oozi-W@hive-6e91] No Notification URL is defined. Therefore nothing to notify for job 0000000-211117134747764-oozie-oozi-W@hive-6e91 Log Type: prelaunch.err Log Upload Time: Wed Nov 17 17:11:00 +0900 2021 Log Length: 0 Log Type: prelaunch.out Log Upload Time: Wed Nov 17 17:11:00 +0900 2021 Log Length: 70 Setting up env variables Setting up job resources Launching container Log Type: stderr Log Upload Time: Wed Nov 17 17:11:00 +0900 2021 Log Length: 4561 Showing 4096 bytes of 4561 total. Click here for the full log. aticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console. Set system property 'org.apache.logging.log4j.simplelog.StatusLogger.level' to TRACE to show Log4j2 internal initialization logging. 
Connecting to jdbc:hive2://server02.hadoop.com:10000/default Connected to: Apache Hive (version 2.1.1-cdh6.3.2) Driver: Hive JDBC (version 2.1.1-cdh6.3.2) Transaction isolation: TRANSACTION_REPEATABLE_READ 0: jdbc:hive2://server02.hadoop.com:10000/def> USE default; INFO : Compiling command(queryId=hive_20211117171057_9409a0f0-f0c7-4196-a475-a9290b9eccdb): USE default INFO : Semantic Analysis Completed INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null) INFO : Completed compiling command(queryId=hive_20211117171057_9409a0f0-f0c7-4196-a475-a9290b9eccdb); Time taken: 0.338 seconds INFO : Executing command(queryId=hive_20211117171057_9409a0f0-f0c7-4196-a475-a9290b9eccdb): USE default INFO : Starting task [Stage-0:DDL] in serial mode INFO : Completed executing command(queryId=hive_20211117171057_9409a0f0-f0c7-4196-a475-a9290b9eccdb); Time taken: 0.016 seconds INFO : OK No rows affected (0.542 seconds) 0: jdbc:hive2://server02.hadoop.com:10000/def> 0: jdbc:hive2://server02.hadoop.com:10000/def> insert overwrite local directory '/home/pilot-pjt/item-buy-list' . . . . . . . . . . . . . . . . . . . . . . .> ROW FORMAT DELIMITED . . . . . . . . . . . . . . . . . . . . . . .> FIELDS TERMINATED BY ',' . . . . . . . . . . . . . . . . . . . . . . .> select car_number, concat_ws("," , collect_set(item)) . . . . . . . . . . . . . . . . . . . . . . .> from managed_smartcar_item_buylis t_info . . . . . . . . . . . . . . . . . . . . . . .> group by car_number . . . . . . . . . . . . . . . . . . . . . . 
.> Error: Error while compiling statement: FAILED: RuntimeException Cannot create staging directory 'hdfs://server01.hadoop.com:8020/home/pilot-pjt/item-buy-list/.hive-staging_hive_2021-11-17_17-10-57_746_3560942864087896700-3': Permission denied: user=admin, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:256) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:194) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1855) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1839) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1798) at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:61) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3101) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1123) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:696) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) at 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675) (state=42000,code=40000) Closing: 0: jdbc:hive2://server02.hadoop.com:10000/default Log Type: stdout Log Upload Time: Wed Nov 17 17:11:00 +0900 2021 Log Length: 198282 Showing 4096 bytes of 198282 total. Click here for the full log. launch_container.sh jetty-jndi-9.3.25.v20180904.jar jersey-container-servlet-core-2.25.1.jar datanucleus-core-4.1.6.jar asm-tree-6.0.jar ------------------------ Script [hive-6e91.sql] content: ------------------------ USE default; insert overwrite local directory '/home/pilot-pjt/item-buy-list' ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' select car_number, concat_ws("," , collect_set(item)) from managed_smartcar_item_buylist_info group by car_number ------------------------ Beeline command arguments : -u jdbc:hive2://server02.hadoop.com:10000/default -n admin -p DUMMY -d org.apache.hive.jdbc.HiveDriver -f hive-6e91.sql -a delegationToken --hiveconf mapreduce.job.tags=oozie-418ffadb1764d24a19ec7f0c02056574 --hiveconf oozie.action.id=0000000-211117134747764-oozie-oozi-W@hive-6e91 --hiveconf oozie.child.mapreduce.job.tags=oozie-418ffadb1764d24a19ec7f0c02056574 --hiveconf oozie.action.rootlogger.log.level=INFO --hiveconf oozie.job.id=0000000-211117134747764-oozie-oozi-W --hiveconf oozie.HadoopAccessorService.created=true Fetching child yarn jobs tag id : oozie-418ffadb1764d24a19ec7f0c02056574 No child applications found ================================================================= >>> Invoking Beeline command line now >>> <<< Invocation of Beeline command completed <<< No child hadoop job is executed. 
java.lang.reflect.InvocationTargetException at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.oozie.action.hadoop.LauncherAM.runActionMain(LauncherAM.java:410) at org.apache.oozie.action.hadoop.LauncherAM.access$300(LauncherAM.java:55) at org.apache.oozie.action.hadoop.LauncherAM$2.run(LauncherAM.java:223) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) at org.apache.oozie.action.hadoop.LauncherAM.run(LauncherAM.java:217) at org.apache.oozie.action.hadoop.LauncherAM$1.run(LauncherAM.java:153) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875) at org.apache.oozie.action.hadoop.LauncherAM.main(LauncherAM.java:141) Caused by: java.lang.SecurityException: Intercepted System.exit(2) at org.apache.oozie.action.hadoop.security.LauncherSecurityManager.checkExit(LauncherSecurityManager.java:57) at java.lang.Runtime.exit(Runtime.java:107) at java.lang.System.exit(System.java:971) at org.apache.oozie.action.hadoop.Hive2Main.runBeeline(Hive2Main.java:273) at org.apache.oozie.action.hadoop.Hive2Main.run(Hive2Main.java:250) at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:104) at org.apache.oozie.action.hadoop.Hive2Main.main(Hive2Main.java:65) ... 
16 more Intercepting System.exit(2) Failing Oozie Launcher, Main Class [org.apache.oozie.action.hadoop.Hive2Main], exit code [2] Oozie Launcher, uploading action data to HDFS sequence file: hdfs://server01.hadoop.com:8020/user/admin/oozie-oozi/0000000-211117134747764-oozie-oozi-W/hive-6e91--hive2/action-data.seq Stopping AM Callback notification attempts left 0 Callback notification trying http://server02.hadoop.com:11000/oozie/callback?id=0000000-211117134747764-oozie-oozi-W@hive-6e91&status=FAILED Callback notification to http://server02.hadoop.com:11000/oozie/callback?id=0000000-211117134747764-oozie-oozi-W@hive-6e91&status=FAILED succeeded Callback notification succeeded That's the whole log. What could the problem be? I already created the directory and changed its permissions as described in another student's post, but it made no difference.
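The decisive line in the paste above is the HDFS error: `Permission denied: user=admin, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x`. The Hive2 action tries to create its `.hive-staging` directory under `hdfs://server01.hadoop.com:8020/home/pilot-pjt/item-buy-list`, and since `/home` does not exist on HDFS, the mkdir falls back to checking write access on `/`, which the `admin` user does not have. A minimal sketch of one common remedy, assuming you can run commands as the `hdfs` superuser on a cluster node (paths mirror the query; adjust to your setup):

```shell
# Pre-create the export path on HDFS and hand ownership to the
# workflow user 'admin', so the Hive2 action can create its
# .hive-staging directory there instead of failing at the root inode.
sudo -u hdfs hdfs dfs -mkdir -p /home/pilot-pjt/item-buy-list
sudo -u hdfs hdfs dfs -chown -R admin:admin /home/pilot-pjt
sudo -u hdfs hdfs dfs -chmod -R 775 /home/pilot-pjt
```

Alternatively, pointing the `insert overwrite local directory` export at a path under `/user/admin/...` keeps the staging directory inside a location the workflow user already owns; note that "local directory" here is local to the HiveServer2 host, not to the machine running Hue.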
- Unresolved · 15-Day Big Data Pilot Project
In the Hue query result screen, columns show as tablename.fieldname
I'm working through Chapter 6 (Big Data Exploration), section 6.6, Exploration Pilot Stage 4: implementing and testing the data-exploration features. After writing and running the per-subject workflows, I check the contents of the generated tables with a query, but in the result screen every column header shows the table name together with the field name. In your lecture the results show only the field names (Figure 6.101, page 283 of the book). Which setting hides the table name?
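No answer appears in this thread, but the `tablename.fieldname` headers described here are typically controlled by the HiveServer2 property `hive.resultset.use.unique.column.names`, which defaults to true. A sketch of disabling it for one session from Beeline, reusing the HS2 address that appears in the course logs; the query and table are illustrative, and in Hue the same property can be set through the query editor's session settings or via Cloudera Manager's HiveServer2 safety valve:

```shell
# Assumption: the same HiveServer2 that backs the Hue editor.
# With the property set to false, result headers show 'car_number'
# instead of 'managed_smartcar_item_buylist_info.car_number'.
beeline -u jdbc:hive2://server02.hadoop.com:10000/default -n admin \
  --hiveconf hive.resultset.use.unique.column.names=false \
  -e "SELECT car_number FROM managed_smartcar_item_buylist_info LIMIT 10;"
```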
- Unresolved · 15-Day Big Data Pilot Project
Would you consider sharing the virtual-server images up to Server 3?
I'd like to study using all three servers. Would you consider sharing the VM images up to Server 3?
- Unresolved · 15-Day Big Data Pilot Project
Bad host health status
Hello! I set up the cluster with Cloudera Manager, but as shown, both hosts report bad health. What should I do in this case? Do I need to increase physical memory? I'm following along a little every day, but it isn't easy. Any help would be appreciated!
- Unresolved · 15-Day Big Data Pilot Project
Running two servers at the same time in VirtualBox
Hello! I'm setting up the environment, but server02 won't start while server01 is running (and conversely, with server02 running, server01 won't start). The error message is shown below. At first neither server would start, so I tried VirtualBox versions from the latest 6.1.28 down to 5.1.38 and 5.2.44 (currently on 5.2.44), installed the extension pack after connecting, turned the Hyper-V feature off, and ran everything from the C: drive in case it was a path issue. I've tried various fixes from Googling, but the two servers still won't run together. How can I resolve this? (16 GB RAM, AMD Ryzen 5 3500X 6-core)
- Unresolved · 15-Day Big Data Pilot Project
Cloudera Manager error
Hello, I was following section 4 (VM environment setup). After logging in to CM and selecting the trial edition, it never advances to the cluster-setup screen; the spinner just keeps spinning on that page. I checked the CM logs and errors do occur, but Googling hasn't given me any leads. Please help! (The error messages are attached in the screenshot below.)
- Unresolved · 15-Day Big Data Pilot Project
Question while working through 4. Integrated VM environment setup
When I went to http://server01.hadoop.com:7180/ it displayed, as shown, that the trial edition has expired. Do I need to reinstall Cloudera?