Question: why is the word "Mr." read as "mister", with an "i" sound after the "m", even though there is no letter "i" written after the "m"?

Word Spelling (15 items, 1 point each; 15 points total)
A) Fill in each blank with an appropriate word, based on the sentence meaning and the Chinese hint. (Answers are shown in place of the blanks.)
1. I'm hungry (饥饿的), Mum. Can I have a snack?
2. Smoking is bad for your health (健康).
3. Do you have enough (足够的) time to finish the work?
4. —Are you from different countries (国家)? —Yes, you are right.
5. Be quiet (安静的), please. The baby is sleeping.
6. Can I take a message (口信) for you? She is out now.
7. The factory has many modern machines (机器).
8. The volunteers are going to work in two communities (社区) this Sunday.
9. Mr. Li, our teacher, is checking (检查) our homework in the office now.
10. You can go to the centre of the city by subway/underground (地铁).
B) Fill in each blank with the proper form of the verb given.
11. There are lots of interesting things to see (see) in the Palace Museum.
12. What about ordering (order) a pizza?
13. The Greens are showing (show) you around the city tomorrow, aren't they?
14. I hope Tom will help (help) me with my English.
15. The children all look forward to having (have) a good time at school.
Difficulty: average. Question type: fill-in-the-blank. Source: 2014 Grade 7 second-semester mid-term English exam, Xingou Middle School, Funing County, Jiangsu Province.
Analysis and Answers
The analysis and answers for the exercise above are as follows:
Item 1: "Mum, I'm hungry. Can I have a snack?" The adjective for 饥饿的 is hungry, so the answer is hungry.
Item 2: "Smoking is bad for your health." be bad for means "harmful to", and 健康 here is a noun, so the answer is health.
Item 3: "Do you have enough time to finish the work?" enough modifies a noun and is placed before it (enough time), so the answer is enough.
Item 4: "Are you from different countries? Yes, you are right." Different countries implies more than one country, so the plural is needed: countries.
Item 5: "Be quiet, please. The baby is sleeping." quiet is an adjective used as a predicative after be, so the answer is quiet.
Item 6: "Can I take a message for you? She is out now." The set phrase is take a message, so the answer is message.
Item 7: "The factory has many modern machines." many modifies a plural countable noun, so the answer is machines.
Item 8: "The volunteers are going to work in two communities this Sunday." Since there are two, the plural of community is needed: communities.
Item 9: "Mr. Li, our teacher, is checking our homework in the office now." The time adverbial now signals the present continuous, so the answer is checking.
Item 10: "You can go to the centre of the city by subway/underground." The phrase for 乘地铁 is by subway/underground, so the answer is subway or underground.
Item 11: "There are lots of interesting things to see in the Palace Museum." This is an infinitive used attributively (interesting things to see), so the answer is to see.
Item 12: "What about ordering a pizza?" about is a preposition and takes a gerund as its object, so the answer is ordering.
Item 13: "The Greens are showing you around the city tomorrow, aren't they?" The present continuous is used here to express the future, so the answer is are showing.
Item 14: "I hope Tom will help me with my English." hope indicates the help has not happened yet, so the future tense is needed: will help.
Item 15: "The children all look forward to having a good time at school." In look forward to, to is a preposition followed by a gerund, so the answer is having.
Mike (Aug 13, 2012):
Hi All,
I have a two-node CentOS Hadoop cluster. I installed Cloudera Manager 4.0 Free Edition and CDH 4.0.3, and the installation went smoothly. I installed Sqoop on it today. When I run a Sqoop import to load data from Oracle 11g into HBase, I get the error in the subject line. I am running it as the root user and still get the same error. Sqoop creates the table fine, but when it tries to write the data, Hadoop (HDFS) gives this error. Any thoughts?
Thanks
Mike (Aug 13, 2012):
I tried the below, still no luck:

chmod 777 /tmp
sudo -u hdfs hadoop fs -chmod -R 1777 /tmp

Thanks
Harsh J (Aug 13, 2012):
Your "root" user seems to lack a home directory in HDFS, which it is trying to use for itself. Do this:

sudo -u hdfs hadoop fs -mkdir /user/root
sudo -u hdfs hadoop fs -chown root:root /user/root

Then run your program, and it should work.
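A quick sanity check after those two commands is to list /user and confirm the new directory's owner. The listing below is illustrative, not taken from the thread:

$ sudo -u hdfs hadoop fs -ls /user
Found 1 items
drwxr-xr-x   - root root          0 2012-08-13 21:00 /user/root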
Mike (Aug 13, 2012):
Thanks Harsh. You da man!
Harsh J (Aug 13, 2012):
Hey Mike, glad it helped. I would appreciate your ideas on how to improve such error messages.
Andy (Aug 22, 2012):
Hi Harsh,
I am seeing a similar problem, but your suggestion (i.e., make the user directory and change ownership) did not work. I am following the CDH4 install instructions, performing the install on RHEL 6.3 (http://aws.amazon.com/rhel/).
When I execute the following command as any user other than the hdfs user, I get this stack trace (it works if I am the hdfs user):

## Command
/usr/bin/hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar grep input output 'dfs[a-z.]+'

## Error
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=joe, access=EXECUTE, inode="/var/lib/hadoop-hdfs/cache/mapred/mapred/staging":hdfs:supergroup:drwx------

If I list the directory contents as joe:

hadoop fs -ls input
Found 3 items
-rw-r--r--   1 joe supergroup   2012-08-22 08:53 input/core-site.xml
-rw-r--r--   1 joe supergroup   2012-08-22 08:53 input/hdfs-site.xml
-rw-r--r--   1 joe supergroup   2012-08-22 08:53 input/mapred-site.xml

As far as I can tell, that user should be able to execute the MapReduce job. Any help or other areas to look at would be much appreciated.
Thanks,
Andy
Joey Echeverria (Aug 22, 2012):
Either change the permissions on /var/lib/hadoop-hdfs/cache/mapred/mapred/staging to be 777, or change the mapred.system.dir to be /user/${user.name}/.staging.
-Joey
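If you take the configuration route, the change goes in mapred-site.xml and needs a JobTracker restart. A minimal sketch; note that on MR1 the per-user staging root is usually governed by mapreduce.jobtracker.staging.root.dir (Joey names mapred.system.dir, so verify which property your release honors):

<!-- mapred-site.xml: staging areas become /user/<username>/.staging -->
<property>
  <name>mapreduce.jobtracker.staging.root.dir</name>
  <value>/user</value>
</property>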
Andy (Aug 27, 2012):
Thanks very much, Joey. I am still having trouble running:

$ /usr/bin/hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar grep input output 'dfs[a-z.]+'

as any user other than hdfs. The ability to successfully run the job as hdfs makes me think my whole installation is not hosed. I wonder if the error is a red herring; I say that because the directory referenced in the error:

inode="/var/lib/hadoop-hdfs/cache/mapred/mapred/staging"

does not even seem to exist. For example, if I:

$ cd /var/lib/hadoop-hdfs/cache/mapred/mapred
$ ls -l
total 4
drwxrwxrwx. 8 mapred mapred 4096 Aug 27 08:46 local

or search for it:

$ find / -type d -name "staging" 2> /dev/null
/lib/modules/2.6.32-276.el6.x86_64/kernel/drivers/staging

I don't see the staging directory. Regardless, I tried changing all permissions under the mapred directory:

chmod -R 777 /var/lib/hadoop-hdfs/cache/mapred/mapred

To check that it updated the permissions:

cd /var/lib/hadoop-hdfs/cache/mapred/mapred
stat -c '%A %a %n' *
drwxrwxrwx 777 local

As far as I can tell at this point, everything has read, write, and execute permissions. I also compared the groups for joe vs. hdfs:

groups joe
joe : joe hdfs hdusers
groups hdfs
hdfs : hdfs hdusers

And I still get the same error when running the command:

Permission denied: user=joe, access=EXECUTE, inode="/var/lib/hadoop-hdfs/cache/mapred/mapred/staging":hdfs:supergroup:drwx------

Any thoughts or advice are much appreciated.
- Andy
Joey Echeverria (Aug 27, 2012):
The directory is in HDFS, not the local file system. So you need to do something like the following:

sudo -u hdfs hadoop fs -chmod -R 777 /var/lib/hadoop-hdfs/cache/mapred/mapred
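The point is easy to miss because the HDFS path mirrors a real local path: the same string names two different directories in two different namespaces. A sketch of how to inspect each (the commands just combine tools already used in this thread):

# local filesystem: what the earlier ls and find commands were searching
ls -l /var/lib/hadoop-hdfs/cache/mapred/mapred

# HDFS namespace: where the "staging" inode from the error actually lives
sudo -u hdfs hadoop fs -ls /var/lib/hadoop-hdfs/cache/mapred/mapred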
Andy (Aug 27, 2012):
Thanks Joey, that was exactly it. If it is helpful, I documented the steps I took in addition to what you guys provide. I don't know if it would be useful for your documentation going forward. If so, let me know and I will send it your way.
- Andy
Joey Echeverria (Aug 27, 2012):
I'd love to see what you wrote. Improving our docs is always a good thing.
-Joey
Andy (Aug 27, 2012):
Attached is what I have; most of the content is what you guys provided. The areas where I ended up doing a little debugging were:
- Configuring the firewall rules for RHEL
- Installing the JDK (you document this, but I found it would not work unless I also edited /etc/profile and /etc/sudoers)
- Updating the file permissions
This was my first attempt at anything Hadoop related, so some of this was most likely beginner error. After walking through it once, it doesn't seem like it would take more than an hour to recreate the setup.
- Andy
Muhammad Mohsin Ali (Sep 5, 2012):
Hello guys,
I have tried all of the above, but I can't seem to make my MapReduce work. I am using CDH4. It fails to start the JobTracker. The error log is:

org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:186)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:135)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4265)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4236)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2628)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2592)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:638)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:412)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:42618)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:1741)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:482)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1731)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:503)
    at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2284)
    at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2053)
    at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:294)
    at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:286)
    at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4799)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
    (same NameNode-side frames as above, then:)
    at org.apache.hadoop.ipc.Client.call(Client.java:1161)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:184)
    at $Proxy10.mkdirs(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:84)
    at $Proxy10.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:420)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:1739)
    ... 8 more

Please help :(
Harsh J (Sep 5, 2012):
Hi,
Have you followed the MR1 deployment guide? Specifically, your JobTracker needs that setup step to have been taken care of before it starts. Are you using Cloudera Manager? It helps avoid having to run these steps manually.
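The setup step Harsh refers to is creating the MapReduce system and staging directories inside HDFS, owned by the mapred user, before the JobTracker first starts; that is what removes the need for mapred to write to "/". A sketch of that setup, assuming the /var/lib/hadoop-hdfs/cache layout seen earlier in this thread (the exact paths are an assumption; check the deployment guide and your mapred.system.dir value for your release):

# run as the HDFS superuser
sudo -u hdfs hadoop fs -mkdir -p /var/lib/hadoop-hdfs/cache/mapred/mapred/staging
sudo -u hdfs hadoop fs -chmod 1777 /var/lib/hadoop-hdfs/cache/mapred/mapred/staging
sudo -u hdfs hadoop fs -chown -R mapred /var/lib/hadoop-hdfs/cache/mapred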
Thilak (Nov 27, 2012):
Hi Mike,
I have a CentOS Hadoop cluster and I face the same problem when I try to create a table in Hive. I tried:

chmod 777 /tmp
sudo -u hdfs hadoop fs -chmod -R 1777 /tmp

and I also tried:

sudo -u hdfs hadoop fs -mkdir /user/root
sudo -u hdfs hadoop fs -chown root:root /user/root

But it didn't work! Any suggestion?
Ram Krishnamurthy (Nov 28, 2012):
Are you logged in as root or hdfs? If you su - hdfs and try Hive, it should work.
Kavish Ahuja (Feb 23, 2013):
The simplest answer is disabling the DFS permission check, by adding the property below to conf/hdfs-site.xml:

<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>

Do this in the CDH4 Manager HDFS configuration (if you are using CDH4 to build your Hadoop cluster). Cheers!
Hi,
I had this problem recently with Apache Hadoop 1.0.4 and found the following solution.

Problem:
job initialization failed: org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=EXECUTE, inode="system":hadoop:root:rwx------
    at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

Look at:
hadoop fs -ls /tmp/hadoop-hadoop/mapred/system
Found 1 items
-rw-------   3 hadoop root   14:38 /tmp/hadoop-hadoop/mapred/system/jobtracker.info
(compare the above with the error message)

Fix, as user hadoop:
hadoop fs -chown -R hadoop:hadoop /tmp/hadoop-hadoop/mapred/system
hadoop fs -chmod -R 777 /tmp/hadoop-hadoop/mapred/system
fiona ren (May 15, 2013):
Hey Kavish,
I'm using Cloudera Manager to configure the cluster, and I have now come across the same problem. You mentioned doing this in the CDH4 Manager HDFS configuration. I opened the settings page for hdfs1 and clicked "configure", but I don't know which file to edit. Could you please provide more details on what to add and where?
Thanks
Darren Lo (May 15, 2013):
Hi Fiona,
Turning off security is not recommended, nor necessary. Usually you run your Hive queries or MapReduce jobs as a non-system user (not hive, root, or hdfs, but something like fiona or bob). You also need to create the home directory for these users. If you are using Hue, you can very easily create a user in Hue and check the option to create the home directory. If not, you can run:

sudo -u hdfs hdfs dfs -mkdir /user/fiona
sudo -u hdfs hdfs dfs -chown fiona:fiona /user/fiona

At this point, running Hive jobs while logged in as user "fiona" should work.
If you really want to disable permission checking, then you can edit the HDFS configuration via the Cloudera Manager UI. Look for the property "Check HDFS Permissions", which should appear by default along with all other service-wide configs. If it doesn't appear, search for it using the search box on the left. Then restart HDFS.
Thanks,
Darren
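Since this home-directory setup recurs throughout the thread, it can be worth wrapping in a small helper. A minimal sketch; the function name is made up, and it assumes the HDFS superuser is hdfs and that each account's primary group matches its username:

# hypothetical helper; run on a host with HDFS client configuration
create_hdfs_home() {
  local u="$1"
  sudo -u hdfs hdfs dfs -mkdir "/user/$u"           # create the home directory as the HDFS superuser
  sudo -u hdfs hdfs dfs -chown "$u:$u" "/user/$u"   # hand ownership to the user
}

create_hdfs_home fiona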
fiona ren (May 15, 2013):
Hi Darren,
Really appreciate your prompt reply! Here's my situation. I ran these commands:

ints = to.dfs(1:100)
calc = mapreduce(input = ints,
                 map = function(k, v) cbind(v, 2*v))
from.dfs(calc)

and came across these errors:

packageJobJar: [/tmp/Rtmpg8rLZs/rmr-local-env6a2e33fc3b5a, /tmp/Rtmpg8rLZs/rmr-global-env6a2e13cfe97b, /tmp/Rtmpg8rLZs/rmr-streaming-map6a2e, /tmp/hadoop-dlabadmin/hadoop-unjar5520558/] [] /tmp/streamjob2864307.jar tmpDir=null
13/05/15 12:52:13 ERROR security.UserGroupInformation: PriviledgedActionException as:dlabadmin (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Permission denied: user=dlabadmin, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:186)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:135)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4655)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2996)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2960)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2938)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:648)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:417)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44096)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
13/05/15 12:52:13 ERROR streaming.StreamJob: Error Launching job : Permission denied: user=dlabadmin, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
    (same frames as above)
Streaming Command Failed!
Error in mr(map = map, reduce = reduce, combine = combine, vectorized.reduce,  :
  hadoop streaming failed with error code 5
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
rmr: DEPRECATED: Please use 'rm -r' instead.
13/05/15 12:52:18 WARN fs.TrashPolicyDefault: Can't create trash directory: hdfs://ub12hdpmaster:8020/user/dlabadmin/.Trash/Current/tmp/Rtmpg8rLZs
rmr: Failed to move to trash: hdfs://ub12hdpmaster:8020/tmp/Rtmpg8rLZs/file6a2e5d588a87. Consider using -skipTrash option
> hdfs.ls("/tmp")

My understanding of these errors is that the dlabadmin user does not have write privileges on these HDFS paths, and I am trying to find a way to grant it more privileges. Let me know whether my understanding is correct.
Thanks,
Fiona
Darren Lo (May 15, 2013):
Hi Fiona,
The problem is usually that MapReduce is trying to create something in the user directory, in your case /user/dlabadmin. Since the user "dlabadmin" doesn't have write privileges to /user, this fails. If you create your user's home directory in HDFS and chown it to the right user, you'll probably get past this error. dlabadmin doesn't actually need write permissions to /user; it just needs its home directory created.
Thanks,
Darren
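Applied to this case, the pattern from earlier in the thread becomes the following; the group name dlabadmin is an assumption, so substitute the account's real primary group:

sudo -u hdfs hdfs dfs -mkdir /user/dlabadmin
sudo -u hdfs hdfs dfs -chown dlabadmin:dlabadmin /user/dlabadmin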
