
Installing Flink 1.14 on Ambari 2.7.5

Table of Contents
    • Download Flink and set up the install source
    • Download the ambari-flink-service
    • Modify the configuration files
    • Create the flink user and group
    • Restart Ambari
    • Log in to Ambari and install Flink
    • Submit Flink jobs
      • Submit a job directly to YARN (yarn-cluster)
      • Submit to an existing Flink YARN session
    • Exceptions
      • Exception 1
      • Exception 2
      • Exception 3

Download Flink and set up the install source
wget https://archive.apache.org/dist/flink/flink-1.14.2/flink-1.14.2-bin-scala_2.12.tgz
wget https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.8.3-10.0/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar
mkdir  /var/www/html/flink
mv flink-1.14.2-bin-scala_2.12.tgz /var/www/html/flink/
mv flink-shaded-hadoop-2-uber-2.8.3-10.0.jar /var/www/html/flink/

All stable Flink releases are available from https://archive.apache.org/dist/flink/.
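The packages under /var/www/html/flink are served by the Apache httpd instance on the Ambari server, assumed here to answer at 172.16.24.194 (the address used in the configuration below). A quick sanity check that the install source is reachable:

systemctl enable --now httpd
curl -I http://172.16.24.194/flink/flink-1.14.2-bin-scala_2.12.tgz    # expect HTTP/1.1 200 OK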

Download the ambari-flink-service
  • Check the HDP version
[root@bigdata ~]# VERSION=`hdp-select status hadoop-client | sed 's/hadoop-client - \([0-9]\.[0-9]\).*/\1/'`
[root@bigdata ~]# echo $VERSION
3.1
  • Clone ambari-flink-service into the ambari-server resources directory
# git clone https://github.com/abajwa-hw/ambari-flink-service.git   /var/lib/ambari-server/resources/stacks/HDP/$VERSION/services/FLINK

Cloning into '/var/lib/ambari-server/resources/stacks/HDP/3.1/services/FLINK'...
remote: Enumerating objects: 198, done.
remote: Counting objects: 100% (6/6), done.
remote: Compressing objects: 100% (5/5), done.
remote: Total 198 (delta 0), reused 3 (delta 0), pack-reused 192
Receiving objects: 100% (198/198), 2.09 MiB | 982.00 KiB/s, done.
Resolving deltas: 100% (89/89), done.
  • Verify the clone succeeded
# ll /var/lib/ambari-server/resources/stacks/HDP/$VERSION/services/FLINK
total 32
drwxr-xr-x 2 root root 4096 Jan 13 14:01 configuration
-rw-r--r-- 1 root root  223 Jan 13 14:01 kerberos.json
-rw-r--r-- 1 root root 1777 Jan 13 14:01 metainfo.xml
drwxr-xr-x 3 root root 4096 Jan 13 14:01 package
-rwxr-xr-x 1 root root 8114 Jan 13 14:01 README.md
-rw-r--r-- 1 root root  125 Jan 13 14:01 role_command_order.json
drwxr-xr-x 2 root root 4096 Jan 13 14:01 screenshots
Modify the configuration files

Modify metainfo.xml

vim /var/lib/ambari-server/resources/stacks/HDP/3.1/services/FLINK/metainfo.xml

Set the service name, display name, and version:

<name>FLINK</name>
<displayName>Flink</displayName>
<version>1.14.2</version>

Set JAVA_HOME

vim /var/lib/ambari-server/resources/stacks/HDP/3.1/services/FLINK/configuration/flink-env.xml

env.java.home: /opt/jdk1.8.0_151
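The value must point at a JDK that actually exists on the Flink host; adjust it to your environment. A quick check against the path used above:

ls /opt/jdk1.8.0_151/bin/java
/opt/jdk1.8.0_151/bin/java -version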

Modify flink-ambari-config.xml

Point the two download URLs at the packages hosted on our own httpd server:

vim /var/lib/ambari-server/resources/stacks/HDP/3.1/services/FLINK/configuration/flink-ambari-config.xml

<property>
  <name>flink_download_url</name>
  <value>http://172.16.24.194/flink/flink-1.14.2-bin-scala_2.12.tgz</value>
  <description>Snapshot download location. Downloaded when setup_prebuilt is true</description>
</property>
<property>
  <name>flink_hadoop_shaded_jar</name>
  <value>http://172.16.24.194/flink/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar</value>
  <description>Flink shaded hadoop jar download location. Downloaded when setup_prebuilt is true</description>
</property>
Create the flink user and group
groupadd flink
useradd  -d /home/flink  -g flink flink
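A quick sanity check (the numeric ids will differ per host):

id flink    # e.g. uid=1015(flink) gid=1015(flink) groups=1015(flink)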
Restart Ambari
ambari-server restart
Log in to Ambari and install Flink

Choose the host on which to install Flink.

Configure failover for Flink on YARN

Add the following under Custom flink-env:

yarn.client.failover-proxy-provider: org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider

Click Next.

Click DEPLOY.

Installation succeeded.

Expose the flink command on the Linux server:

ln -s /opt/flink/bin/flink /usr/bin/flink
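Verify that the link resolves:

ls -l /usr/bin/flink    # /usr/bin/flink -> /opt/flink/bin/flink
which flink             # /usr/bin/flink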
Submit Flink jobs

Submit a job directly to YARN (yarn-cluster)

Use the -m yarn-cluster flag:

/opt/flink/bin/flink run \
-m yarn-cluster \
-p $P  \
-ys $YS \
-yjm $YJM \
-ytm $YTM \
-yt $JAR_PATH/lib \
-ynm $YNM \
-yD env.java.opts="-Dfile.encoding=UTF-8" \
-c $START_CLASS \
$JAR_PATH/$JAR_NAME.jar $1
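The $-prefixed names are placeholders from the author's submit script: -p sets the job parallelism, -ys the task slots per TaskManager, -yjm and -ytm the JobManager and TaskManager container memory, -yt ships a directory of dependency jars to YARN, -ynm names the YARN application, and -c selects the main class. A concrete invocation with purely illustrative values might look like:

/opt/flink/bin/flink run \
-m yarn-cluster \
-p 2 \
-ys 2 \
-yjm 1024m \
-ytm 2048m \
-yt /opt/jobs/myapp/lib \
-ynm myapp \
-yD env.java.opts="-Dfile.encoding=UTF-8" \
-c com.example.MyStreamJob \
/opt/jobs/myapp/myapp.jar arg1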
Submit to an existing Flink YARN session

Once the Flink session has started, it appears in the YARN UI as a single application with an application id.
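The id can also be read with the standard YARN CLI; the session started by Ambari is named flinkapp-from-ambari:

yarn application -list | grep flinkapp-from-ambari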

Pass that id with -yid application_1673426410002_0013:

/opt/flink/bin/flink run \
-yid application_1673426410002_0013 \
-ys $YS \
-yjm $YJM \
-ytm $YTM \
-yt $JAR_PATH/lib \
-ynm $YNM \
-yD env.java.opts="-Dfile.encoding=UTF-8" \
-c $START_CLASS \
$JAR_PATH/$JAR_NAME.jar $1

Note:

Dependency jars are shipped with -yt $JAR_PATH/lib; when submitting with -yid into an already running session, those jars may not be found.

Exceptions

Exception 1
stderr: 
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py", line 38, inBeforeAnyHook().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py", line 31, in hook
    setup_users()
  File "/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/shared_initialization.py", line 50, in setup_users
    groups = params.user_to_groups_dict[user],
KeyError: u'flink'
Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-852.json', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-852.json', 'INFO', '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1_2', '']
 stdout:
2023-01-13 14:22:09,665 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=None ->3.1
2023-01-13 14:22:09,676 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2023-01-13 14:22:09,679 - Group['flink'] {}
2023-01-13 14:22:09,682 - Group['livy'] {}
2023-01-13 14:22:09,682 - Group['spark'] {}
2023-01-13 14:22:09,683 - Group['hdfs'] {}
2023-01-13 14:22:09,684 - Group['hadoop'] {}
2023-01-13 14:22:09,684 - Group['users'] {}
2023-01-13 14:22:09,685 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2023-01-13 14:22:09,687 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2023-01-13 14:22:09,689 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2023-01-13 14:22:09,691 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2023-01-13 14:22:09,693 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-852.json', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-852.json', 'INFO', '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1_2', '']
2023-01-13 14:22:09,738 - The repository with version 3.1.5.0-152 for this command has been marked as resolved. It will be used to report the version of the component which was installed
2023-01-13 14:22:09,750 - Skipping stack-select on FLINK because it does not exist in the stack-select package structure.

Command failed after 1 tries

Solution:

# python configs.py -u admin -p admin -n $cluster_name -l $ambari_server -t 8080 -a set -c cluster-env -k ignore_groupsusers_create -v true

> cd /var/lib/ambari-server/resources/scripts
> python configs.py -u admin -p admin -n dev -l 172.16.24.194 -t 8080 -a set -c cluster-env -k ignore_groupsusers_create -v true
2023-01-13 14:26:26,174 INFO ### Performing "set":
2023-01-13 14:26:26,175 INFO ### new property - "ignore_groupsusers_create":"true"
2023-01-13 14:26:26,290 INFO ### on (Site:cluster-env, Tag:5553e181-525d-45c8-bc15-f6bdfcce607f)
2023-01-13 14:26:26,320 INFO ### PUTting json into: doSet_version1673591186320343.json
2023-01-13 14:26:26,646 INFO ### NEW Site:cluster-env, Tag:version1673591186320343
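configs.py also supports a get action, which is a convenient way to confirm the change took effect:

python configs.py -u admin -p admin -n dev -l 172.16.24.194 -t 8080 -a get -c cluster-env | grep ignore_groupsusers_create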
Exception 2
stderr: 
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.1/services/FLINK/package/scripts/flink.py", line 172, inMaster().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.1/services/FLINK/package/scripts/flink.py", line 119, in start
    Execute (cmd + format(">>{flink_log_file}"), user=params.flink_user)
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
    returns=self.resource.returns)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'export HADOOP_CONF_DIR=/etc/hadoop/conf; export HADOOP_CLASSPATH=/usr/hdp/3.1.5.0-152/hadoop/conf:/usr/hdp/3.1.5.0-152/hadoop/lib/*:/usr/hdp/3.1.5.0-152/hadoop/.//*:/usr/hdp/3.1.5.0-152/hadoop-hdfs/./:/usr/hdp/3.1.5.0-152/hadoop-hdfs/lib/*:/usr/hdp/3.1.5.0-152/hadoop-hdfs/.//*:/usr/hdp/3.1.5.0-152/hadoop-mapreduce/lib/*:/usr/hdp/3.1.5.0-152/hadoop-mapreduce/.//*:/usr/hdp/3.1.5.0-152/hadoop-yarn/./:/usr/hdp/3.1.5.0-152/hadoop-yarn/lib/*:/usr/hdp/3.1.5.0-152/hadoop-yarn/.//*:/usr/hdp/3.1.5.0-152/tez/*:/usr/hdp/3.1.5.0-152/tez/lib/*:/usr/hdp/3.1.5.0-152/tez/conf:/usr/hdp/3.1.5.0-152/tez/conf_llap:/usr/hdp/3.1.5.0-152/tez/doc:/usr/hdp/3.1.5.0-152/tez/hadoop-shim-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/hadoop-shim-2.8-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/lib:/usr/hdp/3.1.5.0-152/tez/man:/usr/hdp/3.1.5.0-152/tez/tez-api-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-common-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-dag-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-examples-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-history-parser-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-javadoc-tools-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-job-analyzer-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-mapreduce-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-protobuf-history-plugin-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-runtime-internals-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-runtime-library-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-tests-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-yarn-timeline-cache-plugin-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-yarn-timeline-history-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-yarn-timeline-history-with-acls-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-yarn-timeline-history-with-fs-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/ui:/usr/hdp/3.1.5.0-152/tez/lib/async-http-client-1.9.40.jar:/usr/hdp/3.1.5.0-152/tez/lib/commons-cli-1.2.jar:/usr/hdp/3.1.5.0-152/tez/lib/commons-codec-1.4.jar:/usr/hdp/3.1.5.0-152/tez/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.5.0-152/tez/lib/commons-collections4-4.1.jar:/usr/hdp/3.1.5.0-152/tez/lib/commons-io-2.4.jar:/usr/hdp/3.1.5.0-152/tez/lib/commons-lang-2.6.jar:/usr/hdp/3.1.5.0-152/tez/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.5.0-152/tez/lib/gcs-connector-hadoop3-1.9.17.3.1.5.0-152-shaded.jar:/usr/hdp/3.1.5.0-152/tez/lib/guava-28.0-jre.jar:/usr/hdp/3.1.5.0-152/tez/lib/hadoop-aws-3.1.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/lib/hadoop-azure-3.1.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/lib/hadoop-azure-datalake-3.1.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/lib/hadoop-hdfs-client-3.1.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/lib/hadoop-mapreduce-client-common-3.1.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/lib/hadoop-mapreduce-client-core-3.1.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/lib/hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/lib/jersey-client-1.19.jar:/usr/hdp/3.1.5.0-152/tez/lib/jersey-json-1.19.jar:/usr/hdp/3.1.5.0-152/tez/lib/jettison-1.3.4.jar:/usr/hdp/3.1.5.0-152/tez/lib/jetty-server-9.3.24.v20180605.jar:/usr/hdp/3.1.5.0-152/tez/lib/jetty-util-9.3.24.v20180605.jar:/usr/hdp/3.1.5.0-152/tez/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.5.0-152/tez/lib/metrics-core-3.1.0.jar:/usr/hdp/3.1.5.0-152/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.5.0-152/tez/lib/RoaringBitmap-0.4.9.jar:/usr/hdp/3.1.5.0-152/tez/l
ib/servlet-api-2.5.jar:/usr/hdp/3.1.5.0-152/tez/lib/slf4j-api-1.7.10.jar:/usr/hdp/3.1.5.0-152/tez/lib/tez.tar.gz; /opt/flink/bin/yarn-session.sh -d -nm flinkapp-from-ambari -n 1 -s 1 -jm 768 -tm 1024 -qu default >>/var/log/flink/flink-setup.log' returned 1. SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/flink/lib/log4j-slf4j-impl-2.16.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.5.0-152/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

------------------------------------------------------------
 The program finished with the following exception:

org.apache.flink.configuration.IllegalConfigurationException: JobManager memory configuration failed: Sum of configured JVM Metaspace (256.000mb (268435456 bytes)) and JVM Overhead (192.000mb (201326592 bytes)) exceed configured Total Process Memory (256.000mb (268435456 bytes)).
	at org.apache.flink.runtime.jobmanager.JobManagerProcessUtils.processSpecFromConfigWithNewOptionToInterpretLegacyHeap(JobManagerProcessUtils.java:78)
	at org.apache.flink.client.deployment.AbstractContainerizedClusterClientFactory.getClusterSpecification(AbstractContainerizedClusterClientFactory.java:43)
	at org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:602)
	at org.apache.flink.yarn.cli.FlinkYarnSessionCli.lambda$main$4(FlinkYarnSessionCli.java:860)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
	at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
	at org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:860)
Caused by: org.apache.flink.configuration.IllegalConfigurationException: Sum of configured JVM Metaspace (256.000mb (268435456 bytes)) and JVM Overhead (192.000mb (201326592 bytes)) exceed configured Total Process Memory (256.000mb (268435456 bytes)).
	at org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils.deriveJvmMetaspaceAndOverheadWithTotalProcessMemory(ProcessMemoryUtils.java:157)
	at org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils.deriveProcessSpecWithTotalProcessMemory(ProcessMemoryUtils.java:114)
	at org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils.memoryProcessSpecFromConfig(ProcessMemoryUtils.java:84)
	at org.apache.flink.runtime.jobmanager.JobManagerProcessUtils.processSpecFromConfig(JobManagerProcessUtils.java:83)
	at org.apache.flink.runtime.jobmanager.JobManagerProcessUtils.processSpecFromConfigWithNewOptionToInterpretLegacyHeap(JobManagerProcessUtils.java:73)
	... 8 more
 stdout:
2023-01-13 14:27:05,019 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.5.0-152 ->3.1.5.0-152
2023-01-13 14:27:05,062 - Using hadoop conf dir: /usr/hdp/3.1.5.0-152/hadoop/conf
2023-01-13 14:27:05,535 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.5.0-152 ->3.1.5.0-152
2023-01-13 14:27:05,548 - Using hadoop conf dir: /usr/hdp/3.1.5.0-152/hadoop/conf
2023-01-13 14:27:05,552 - Skipping creation of User and Group as host is sys prepped or ignore_groupsusers_create flag is on
2023-01-13 14:27:05,553 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2023-01-13 14:27:05,559 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2023-01-13 14:27:05,562 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2023-01-13 14:27:05,564 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2023-01-13 14:27:05,592 - call returned (0, '1014')
2023-01-13 14:27:05,594 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1014'] {'not_if': '(test $(id -u hbase) -gt 1000) || (ture)'}
2023-01-13 14:27:05,610 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1014'] due to not_if
2023-01-13 14:27:05,611 - Skipping setting dfs cluster admin and tez view acls as host is sys prepped
2023-01-13 14:27:05,611 - FS Type: HDFS
2023-01-13 14:27:05,612 - Directory['/etc/hadoop'] {'mode': 0755}
2023-01-13 14:27:05,653 - File['/usr/hdp/3.1.5.0-152/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2023-01-13 14:27:05,655 - Writing File['/usr/hdp/3.1.5.0-152/hadoop/conf/hadoop-env.sh'] because contents don't match
2023-01-13 14:27:05,656 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2023-01-13 14:27:05,699 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2023-01-13 14:27:05,721 - Skipping Execute[('setenforce', '0')] due to not_if
2023-01-13 14:27:05,722 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2023-01-13 14:27:05,730 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2023-01-13 14:27:05,732 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'cd_access': 'a'}
2023-01-13 14:27:05,733 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2023-01-13 14:27:05,744 - File['/usr/hdp/3.1.5.0-152/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2023-01-13 14:27:05,749 - File['/usr/hdp/3.1.5.0-152/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2023-01-13 14:27:05,763 - File['/usr/hdp/3.1.5.0-152/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2023-01-13 14:27:05,791 - File['/usr/hdp/3.1.5.0-152/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2023-01-13 14:27:05,793 - File['/usr/hdp/3.1.5.0-152/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2023-01-13 14:27:05,795 - File['/usr/hdp/3.1.5.0-152/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2023-01-13 14:27:05,806 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2023-01-13 14:27:05,819 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2023-01-13 14:27:05,831 - Skipping unlimited key JCE policy check and setup since the Java VM is not managed by Ambari
2023-01-13 14:27:05,851 - Skipping stack-select on FLINK because it does not exist in the stack-select package structure.
2023-01-13 14:27:06,350 - File['/opt/flink/conf/flink-conf.yaml'] {'owner': 'flink', 'content': InlineTemplate(...)}
2023-01-13 14:27:06,354 - Writing File['/opt/flink/conf/flink-conf.yaml'] because contents don't match
2023-01-13 14:27:06,356 - Execute['hadoop fs -mkdir -p /user/flink'] {'ignore_failures': True, 'user': 'hdfs'}
2023-01-13 14:27:10,231 - Execute['hadoop fs -chown flink /user/flink'] {'user': 'hdfs'}
2023-01-13 14:27:13,858 - Execute['hadoop fs -chgrp flink /user/flink'] {'user': 'hdfs'}
2023-01-13 14:27:17,779 - Execute['echo bin dir /opt/flink/bin'] {}
2023-01-13 14:27:17,789 - Execute['echo pid file /var/run/flink/flink.pid'] {}
2023-01-13 14:27:18,005 - Execute['export HADOOP_CONF_DIR=/etc/hadoop/conf; export HADOOP_CLASSPATH=/usr/hdp/3.1.5.0-152/hadoop/conf:/usr/hdp/3.1.5.0-152/hadoop/lib/*:/usr/hdp/3.1.5.0-152/hadoop/.//*:/usr/hdp/3.1.5.0-152/hadoop-hdfs/./:/usr/hdp/3.1.5.0-152/hadoop-hdfs/lib/*:/usr/hdp/3.1.5.0-152/hadoop-hdfs/.//*:/usr/hdp/3.1.5.0-152/hadoop-mapreduce/lib/*:/usr/hdp/3.1.5.0-152/hadoop-mapreduce/.//*:/usr/hdp/3.1.5.0-152/hadoop-yarn/./:/usr/hdp/3.1.5.0-152/hadoop-yarn/lib/*:/usr/hdp/3.1.5.0-152/hadoop-yarn/.//*:/usr/hdp/3.1.5.0-152/tez/*:/usr/hdp/3.1.5.0-152/tez/lib/*:/usr/hdp/3.1.5.0-152/tez/conf:/usr/hdp/3.1.5.0-152/tez/conf_llap:/usr/hdp/3.1.5.0-152/tez/doc:/usr/hdp/3.1.5.0-152/tez/hadoop-shim-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/hadoop-shim-2.8-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/lib:/usr/hdp/3.1.5.0-152/tez/man:/usr/hdp/3.1.5.0-152/tez/tez-api-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-common-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-dag-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-examples-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-history-parser-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-javadoc-tools-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-job-analyzer-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-mapreduce-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-protobuf-history-plugin-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-runtime-internals-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-runtime-library-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-tests-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-yarn-timeline-cache-plugin-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-yarn-timeline-history-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-yarn-timeline-history-with-acls-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/tez-yarn-timeline-history-with-fs-0.9.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/ui:/usr/hdp/3.1.5.0-152/tez/lib/async-http-client-1.9.40.jar:/usr/hdp/3.1.5.0-152/tez/lib/commons-cli-1.2.jar:/usr/hdp/3.1.5.0-152/tez/lib/commons-codec-1.4.jar:/usr/hdp/3.1.5.0-152/tez/lib/commons-collections-3.2.2.jar:/usr/hdp/3.1.5.0-152/tez/lib/commons-collections4-4.1.jar:/usr/hdp/3.1.5.0-152/tez/lib/commons-io-2.4.jar:/usr/hdp/3.1.5.0-152/tez/lib/commons-lang-2.6.jar:/usr/hdp/3.1.5.0-152/tez/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.5.0-152/tez/lib/gcs-connector-hadoop3-1.9.17.3.1.5.0-152-shaded.jar:/usr/hdp/3.1.5.0-152/tez/lib/guava-28.0-jre.jar:/usr/hdp/3.1.5.0-152/tez/lib/hadoop-aws-3.1.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/lib/hadoop-azure-3.1.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/lib/hadoop-azure-datalake-3.1.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/lib/hadoop-hdfs-client-3.1.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/lib/hadoop-mapreduce-client-common-3.1.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/lib/hadoop-mapreduce-client-core-3.1.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/lib/hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.5.0-152.jar:/usr/hdp/3.1.5.0-152/tez/lib/jersey-client-1.19.jar:/usr/hdp/3.1.5.0-152/tez/lib/jersey-json-1.19.jar:/usr/hdp/3.1.5.0-152/tez/lib/jettison-1.3.4.jar:/usr/hdp/3.1.5.0-152/tez/lib/jetty-server-9.3.24.v20180605.jar:/usr/hdp/3.1.5.0-152/tez/lib/jetty-util-9.3.24.v20180605.jar:/usr/hdp/3.1.5.0-152/tez/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.5.0-152/tez/lib/metrics-core-3.1.0.jar:/usr/hdp/3.1.5.0-152/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.5.0-152/tez/lib/RoaringBitmap-0.4.9.jar:/usr/hdp/3.1.5.0-152/tez/lib/servlet-api-2.5.jar:/usr/hdp/
3.1.5.0-152/tez/lib/slf4j-api-1.7.10.jar:/usr/hdp/3.1.5.0-152/tez/lib/tez.tar.gz; /opt/flink/bin/yarn-session.sh -d -nm flinkapp-from-ambari -n 1 -s 1 -jm 768 -tm 1024 -qu default >>/var/log/flink/flink-setup.log'] {'user': 'flink'}
2023-01-13 14:27:22,136 - Skipping stack-select on FLINK because it does not exist in the stack-select package structure.

Command failed after 1 tries

Solution:

The configured total JobManager process memory (256 MB) cannot even cover the default JVM Metaspace (256 MB) plus JVM Overhead (192 MB), so the session refuses to start. Add explicit process sizes to flink-env.xml (the values below are Flink's shipped defaults):

jobmanager.memory.process.size: 1600m
taskmanager.memory.process.size: 1728m

Save, then restart Flink.

Exception 3
stderr: 
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.1/services/FLINK/package/scripts/flink.py", line 172, inMaster().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
    method(env)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 980, in restart
    self.stop(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.1/services/FLINK/package/scripts/flink.py", line 98, in stop
    pid = str(sudo.read_file(status_params.flink_pid_file))
  File "/usr/lib/ambari-agent/lib/resource_management/core/sudo.py", line 151, in read_file
    with open(filename, "rb") as fp:
IOError: [Errno 2] No such file or directory: u'/var/run/flink/flink.pid'

First download javax.ws.rs-api-2.0.jar and place it in /opt/flink/lib:

cd /opt/flink/lib
wget https://repo1.maven.org/maven2/javax/ws/rs/javax.ws.rs-api/2.0/javax.ws.rs-api-2.0.jar
chown flink.flink javax.ws.rs-api-2.0.jar

Solution: start the YARN session manually (run here as the hdfs user):

su hdfs
cd /opt/flink/bin
./yarn-session.sh -n 1 -s 1 -jm 768 -tm 1024 -qu default -nm flinkapp-from-ambari -d >>/var/log/hadoop/hdfs/flink-setup.log

Reference:

https://blog.csdn.net/qq_36048223/article/details/116114765
