Please advise.
Regards
On Sat, 18 Apr 2020, 1:20 am Pau Tallada wrote:
> Hi,
>
> You have to use HTTP requests to interact with a WebHDFS endpoint.
>
> See: https://dzone.com/articles/hadoop-rest-api-webhdfs
>
> Message from Hamza Asad on Fri, 17 Apr 2020 at
Dear member,
I just want to know: can we copy a file from a local/remote server to HDFS
using a WebHDFS command? I know this might not be the right forum, but I'm
unable to find a proper solution and command. Can someone help with
this matter?
There is a hadoop fs command:
hadoop fs -copyFromLocal <localsrc> <dst>
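For reference, a minimal sketch of uploading a local file over WebHDFS with curl. WebHDFS uses a two-step CREATE: the NameNode first answers with a redirect to a DataNode, and the file data goes in the second request. The host, port, and target path below are hypothetical:

    # Step 1: ask the NameNode where to write; it replies with a 307 redirect
    # whose Location header points at a DataNode (no file data is sent yet)
    curl -i -X PUT "http://namenode:50070/webhdfs/v1/user/hamza/data.csv?op=CREATE&overwrite=true"

    # Step 2: PUT the local file to the Location URL returned by step 1
    curl -i -X PUT -T data.csv "<Location URL from step 1>"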
ble_name1;
>
>
> The error I am facing is:
>
> OK
> FAILED: SemanticException [Error 10098]: Non-Partition column appears in the
> partition specification: col1
>
>
>
> Please help me figure out what I am missing.
>
> --
>
> *Kishore Kumar*
> ITIM
>
>
--
*Muhammad Hamza Asad*
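For context, Hive raises SemanticException 10098 when a column named in the PARTITION clause is not declared as a partition column in the table's DDL. A minimal sketch of the failing versus working pattern (table and column names are hypothetical, with an assumed source table src):

    -- Fails with Error 10098: col1 is a regular column, not a partition column
    CREATE TABLE t1 (id INT, col1 STRING);
    INSERT OVERWRITE TABLE t1 PARTITION (col1='a') SELECT id FROM src;

    -- Works: col1 is declared as a partition column
    CREATE TABLE t2 (id INT) PARTITIONED BY (col1 STRING);
    INSERT OVERWRITE TABLE t2 PARTITION (col1='a') SELECT id FROM src;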
Let me know your thoughts.
>
>
>
> --
> *Thanks & Regards *
>
>
> *Unmesha Sreeveni U.B*
> *Hadoop, Bigdata Developer*
> *Center for Cyber Security | Amrita Vishwa Vidyapeetham*
> http://www.unmeshasreeveni.blogspot.in/
>
>
>
--
*Muhammad Hamza Asad*
has voted to make Thejas Nair a committer on the
> Apache
> >> Hive project.
> >>
> >> Please join me in congratulating Thejas!
> >>
>
--
*Muhammad Hamza Asad*
at :)
>
>
>
> On Wed, Jul 17, 2013 at 11:10 PM, Hamza Asad wrote:
>
>> Please let me know which approach is better: either I save my data
>> directly to HDFS and run Hive (Shark) queries over it, OR store my data in
>> HBase and then query it, as I want to ensure efficient data retrieval
Please let me know which approach is better: either I save my data directly
to HDFS and run Hive (Shark) queries over it, OR store my data in HBase and
then query it, as I want to ensure efficient data retrieval and that the data
remains safe and can easily be recovered if Hadoop crashes.
--
*Muhammad Hamza Asad*
FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.DDLTask
Why is this so?
--
*Muhammad Hamza Asad*
>> file, position in the file, entire input line) available in the associated
>> map task log.
>>
>> Jarcec
>>
>> On Tue, Jun 18, 2013 at 03:14:52PM +, Arafat, Moiz wrote:
>> > Can you try using a default value, e.g. 0 or 999, instead of storing NULL
>> i
t.
> For more, you can refer to this:
> http://stackoverflow.com/questions/16886668/why-sqoop-fails-on-numberformatexception-for-numeric-column-during-the-export-fr
>
>
> On Tue, Jun 18, 2013 at 5:52 PM, Hamza Asad wrote:
>
>> Attached are the schema files of both Hive and MySQL
column with bigint
>
>
>
> On Tue, Jun 18, 2013 at 5:37 PM, Hamza Asad wrote:
>
>> I have copy-pasted the row into Office Writer, where I saw it is # separated...
>> yeah, the \N values represent NULL.
>> The version of Sqoop is
>> *Sqoop 1.4.2
>> git commi
field separator?
> Also, the separator is normally given as an octal representation, so you can
> give that a try.
>
> Why do your columns have \N as values? Is it for NULL?
>
> What version of Sqoop are you using?
>
>
> On Tue, Jun 18, 2013 at 5:00 PM, Hamza Asad wrote:
>
>>
sqoop export
>
>
> On Tue, Jun 18, 2013 at 4:31 PM, Hamza Asad wrote:
>
>> I want to export my table to MySQL, and for that I'm using the sqoop export
>> command, but in HDFS I have data apparently without any field separator. But it
>> does contain some field separator
N\N\N\N\N8\N\N\N\N\N1\N\N32\N1
How can I export this type of data to MySQL, and what field separator should I
mention there? Please help.
--
*Muhammad Hamza Asad*
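A minimal sketch of such an export, assuming the Hive table files use Hive's default \001 (Ctrl-A) field delimiter and \N for NULL; the JDBC URL, credentials, table name, and export directory below are hypothetical:

    sqoop export \
      --connect jdbc:mysql://dbhost/mydb \
      --username dbuser -P \
      --table events_details \
      --export-dir /user/hive/warehouse/events_details \
      --input-fields-terminated-by '\001' \
      --input-null-string '\\N' \
      --input-null-non-string '\\N'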
(DAGScheduler.scala:269)
at spark.scheduler.DAGScheduler$$anon$1.run(DAGScheduler.scala:90)
FAILED: Execution Error, return code -101 from shark.execution.SparkTask
Why is it giving me an exception?
--
*Muhammad Hamza Asad*
On Fri, Jun 14, 2013 at 1:38 PM, Hamza Asad wrote:
> OK... got it. Thanks :)
> P.S. Nitin, have
_names) select cols,
> to_date(event_date) from table
>
> This is how it should look.
> Hive will take care of inserting into the respective partitions after you
> enable dynamic partitions.
>
>
> On Fri, Jun 14, 2013 at 1:21 PM, Hamza Asad wrote:
>
>> I'm executing fo
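As an aside, a sketch of the standard settings for enabling dynamic partitions (these are stock Hive properties; nonstrict mode is needed when no static partition value is supplied):

    SET hive.exec.dynamic.partition=true;
    SET hive.exec.dynamic.partition.mode=nonstrict;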
What do you mean by "partition column does not accept
> to_date(event_date) form"?
>
>
>
> On Fri, Jun 14, 2013 at 1:04 PM, Hamza Asad wrote:
>
>> A sample row of my data is
>> *591269735,1,1022,2012-06-24
>> 11:08:10.9,null,2,null,null,null,null,null,null,null
at 12:27 PM, Nitin Pawar wrote:
> Can you provide what your data is and what you want it to look like?
>
>
> On Fri, Jun 14, 2013 at 12:31 PM, Hamza Asad wrote:
>
>> Which UDF? It does not take the to_date(event_date) column
>>
>>
>> On Fri, Jun 14, 2013 at
Please help me out. Am I doing something wrong? Or suggest another
document which explains index implementation and its effective use
completely.
On Thu, Jun 13, 2013 at 3:12 PM, Hamza Asad wrote:
> I have created simple table as follow
> *CREATE TABLE events_details(
>
Which UDF? It does not take the to_date(event_date) column
On Fri, Jun 14, 2013 at 11:54 AM, Nitin Pawar wrote:
> Use the already existing UDFs to split or transform your values the way you
> want.
>
>
> On Fri, Jun 14, 2013 at 12:09 PM, Hamza Asad wrote:
>
>> OIC. I got it.
tion(event_date) select col1, col2
> coln, event_date from old_table;
>
>
>
> On Thu, Jun 13, 2013 at 5:24 PM, Hamza Asad wrote:
>
>> When I browse it in the browser, all the data is in
>> event_date=__HIVE_DEFAULT_PARTITION__<http://10.0.0.14:50075/browseDirecto
and the rest of the files do not contain data
On Thu, Jun 13, 2013 at 4:52 PM, Nitin Pawar wrote:
> What do you mean when you say "it won't split correctly"?
>
>
> On Thu, Jun 13, 2013 at 5:19 PM, Hamza Asad wrote:
>
>> What if I have data of more than 500 days? Then how
ions"
>
>
> On Thu, Jun 13, 2013 at 4:40 PM, Hamza Asad wrote:
>
>> Now I created the partitioned table like
>> *CREATE TABLE new_rc_partition_cluster_table(
>>
>> id int,
>> event_id int,
>> user_id BIGINT,
>>
>> intval_1 int ,
>
o partitioned table created
> something like
> partitioned by (event_date string)
>
>
> On Wed, Jun 12, 2013 at 7:17 PM, Hamza Asad wrote:
>
>> I have created the table after enabling dynamic partitions. I partitioned it
>> on date, but it is not splitting the data datewise. Below
GROUP BY
weekofyear(event_date), but its execution time has not improved (the same
770 sec as before). What am I doing wrong? Please help me out.
--
*Muhammad Hamza Asad*
INSERT OVERWRITE TABLE rc_partition_cluster_table PARTITION (event_date)
SELECT * FROM events_details;
Why is it not working?
--
*Muhammad Hamza Asad*
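A sketch of the usual fixes for this pattern (standard Hive properties; the column list is assumed from the DDL fragment earlier in this thread): dynamic partitioning must be enabled, and the partition column must come last in the SELECT, which SELECT * only satisfies if event_date is the final column of the source table.

    SET hive.exec.dynamic.partition=true;
    SET hive.exec.dynamic.partition.mode=nonstrict;

    INSERT OVERWRITE TABLE rc_partition_cluster_table PARTITION (event_date)
    SELECT id, event_id, user_id,   -- remaining non-partition columns elided
           event_date               -- partition column must be last
    FROM events_details;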
Basically, the data resides in the dfs folder, and to repair Hadoop I have to
remove the dfs folder. Now I have the data in a dfs-backup folder, but how can
I access it?
On Wed, Jun 12, 2013 at 1:29 PM, Hamza Asad wrote:
> I repaired my Hadoop only, and my tables are also shown in the Hive terminal, but
>
know where your data is stored in HDFS, and you
> can recover it directly; else don't change your Hive metastore, and repair
> your Hadoop system ;)
>
> Matouk
>
>
> 2013/6/12 Hamza Asad
>
>> My Hadoop crashed suddenly and was not coming out of safe mode. I took
My Hadoop crashed suddenly and was not coming out of safe mode. I took a
backup of my data, formatted it, and made my Hadoop cluster come out of safe
mode, but now I have no tables in the Hive warehouse. How can I recover/transfer
the Hive data-warehouse data?
--
*Muhammad Hamza Asad*
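One hedged option for the dfs-backup question, assuming the backup preserved the warehouse layout and the original table schema is known (the schema, row format, and path below are assumptions, not from this thread): define an external table over the backed-up files so Hive can read them in place.

    CREATE EXTERNAL TABLE events_details_recovered (
      id INT,
      event_id INT,
      user_id BIGINT
      -- remaining columns as in the original DDL
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001'
    LOCATION '/dfs-backup/user/hive/warehouse/events_details';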
tition.
>
> For exported data, you don't have to worry; it remains as it is.
>
>
> On Tue, Jun 4, 2013 at 12:41 PM, Hamza Asad wrote:
>
>> No, I don't want to change my queries. I want my queries to work on the same
>> table, and for the partitioning not to change its schema
l ?
>
>
> On Tue, Jun 4, 2013 at 11:37 AM, Hamza Asad wrote:
>
>> That's far better :) ..
>> Please tell me a few more things. Do I have to change my query if I create
>> the table with a partition on date? The rest of the columns would stay the same?
>> Also, if I export t
ssages you would have seen that and could then have added to
> the discussion! :)
>
>
> On Mon, Jun 3, 2013 at 2:19 AM, Hamza Asad wrote:
>
>> Thanks for your response, Nitin. Does anybody else have a better solution?
>>
>>
>> On Mon, Jun 3, 2013 at 1:27 PM, Nitin Pa
ant
> Please wait for others to suggest more options; this one is just mine
> and can be costly too.
>
>
> On Mon, Jun 3, 2013 at 12:36 PM, Hamza Asad wrote:
>
>> No, it's not partitioned by date.
>>
>>
>> On Mon, Jun 3, 2013 at 11:19 AM, Nitin Pawar wrote:
No, it's not partitioned by date.
On Mon, Jun 3, 2013 at 11:19 AM, Nitin Pawar wrote:
> How is the data laid out?
> Is the data partitioned by date?
>
>
> On Mon, Jun 3, 2013 at 11:20 AM, Hamza Asad wrote:
>
>> Dear all,
>> How can I remove data of
Dear all,
How can I remove data of specific dates from HDFS using the Hive
query language?
--
*Muhammad Hamza Asad*
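A sketch of the standard answer, assuming the table is partitioned by date (the table and partition names are hypothetical; for an unpartitioned table there is no per-date delete in this era of HiveQL, which is why partitioning by date comes up in this thread):

    -- Removes both the data and the metadata for that date's partition
    ALTER TABLE events_details DROP PARTITION (event_date='2013-06-01');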
s
> Bejoy KS
>
> Sent from remote device, Please excuse typos
> --
> *From: * Hamza Asad
> *Date: *Thu, 21 Feb 2013 14:26:40 +0500
> *To: *
> *ReplyTo: * user@hive.apache.org
> *Subject: *Running Hive on multi node
>
> Does Hive automatically
d
> change the location where you want.
>
>
> <property>
>   <name>hive.metastore.warehouse.dir</name>
>   <value>/user/hive/warehouse</value>
>   <description>location of default database for the warehouse</description>
> </property>
>
>
>
>
> On Wed, Feb 13, 2013 at 1:44 PM, Hamza Asad wrote:
>
>> Dear all, how can i change default dire
so that I can still query the existing data successfully
On Tue, Feb 12, 2013 at 10:08 PM, Hamza Asad wrote:
> Actually, I have data in HDFS under the Hive warehouse path. I ran short of
> disk space, so I'm changing the HDFS location (to a new HDD partition). So please
> tell me how I can transf
fication
>
> Is it the same HDFS cluster?
> When you say migrating data from one location to another, are you keeping
> the Hive table metadata the same?
> How much capacity do you have, disk-wise?
>
>
>
> On Tue, Feb 12, 2013 at 10:20 PM, Hamza Asad wrote:
>
>> I want
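For the move itself, a hedged sketch (the paths are hypothetical, and re-pointing the metastore afterwards is an assumption beyond this thread): move the warehouse within the cluster, or use distcp for a large tree or a second cluster, then update the table locations or hive.metastore.warehouse.dir accordingly.

    # Same cluster: move the warehouse directory to the new disk's path
    hadoop fs -mv /user/hive/warehouse /data2/hive/warehouse

    # Large trees or a different cluster: distributed copy
    hadoop distcp hdfs://namenode:8020/user/hive/warehouse \
                  hdfs://namenode:8020/data2/hive/warehouse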