Title:  Data Analytics Engineer / Big Data Engineer (immediate need)

Work Location: Atlanta, GA

Experience: 9 – 12 years (minimum 9 years).

Start Date: ASAP

Visa: Open

Priority: High

Level: Senior

Interviews: Telephonic

Duration: Long-term

Rate: Open

Pay-terms: C2C

Job type: Contract



REQUIRED EXPERIENCE:

5 years of hands-on experience with Hadoop, HDFS, MapReduce, and the Hadoop
ecosystem.

Good experience in system monitoring, development, and support-related
activities for Hadoop and Java/J2EE technologies.

Experienced as a Big Data Engineer with a deep understanding of the Hadoop
Distributed File System and its ecosystem (HDFS, MapReduce, Hive, Sqoop,
Oozie, ZooKeeper, HBase, Flume, Pig, Apache Kafka) across a range of
industries, including the retail and communications sectors.

Experience importing and exporting data between RDBMS and HDFS, Hive tables,
and HBase using Sqoop.

Experience ingesting streaming data into HDFS using Flume sources,
interceptors, and sinks (a custom interceptor is sketched below).
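
For illustration only, a minimal sketch of a custom Flume interceptor in Java
that stamps each event with an ingest-time header before it reaches the HDFS
sink; the class name and header key are hypothetical, and the agent's
source/channel/sink wiring is assumed to live in its properties file.

    import java.util.List;
    import org.apache.flume.Context;
    import org.apache.flume.Event;
    import org.apache.flume.interceptor.Interceptor;

    public class IngestTimeInterceptor implements Interceptor {

        @Override
        public void initialize() { }

        @Override
        public Event intercept(Event event) {
            // add a header a time-bucketing HDFS sink could use
            event.getHeaders().put("ingest_ts",
                    Long.toString(System.currentTimeMillis()));
            return event;
        }

        @Override
        public List<Event> intercept(List<Event> events) {
            for (Event event : events) {
                intercept(event);
            }
            return events;
        }

        @Override
        public void close() { }

        public static class Builder implements Interceptor.Builder {
            @Override
            public Interceptor build() {
                return new IngestTimeInterceptor();
            }

            @Override
            public void configure(Context context) { }
        }
    }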

Experience implementing complex analytical algorithms using MapReduce design
patterns (for example, the summarization pattern sketched below).
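
As a hedged illustration of the summarization pattern mentioned above, the
sketch below counts records per product ID from tab-delimited input; the
class names and field layout are hypothetical, not taken from any specific
project. A driver class would wire these into a Job via setMapperClass and
setReducerClass.

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    public class ProductCount {

        public static class ProductMapper
                extends Mapper<LongWritable, Text, Text, LongWritable> {
            private static final LongWritable ONE = new LongWritable(1);
            private final Text productId = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                String[] fields = value.toString().split("\t");
                productId.set(fields[0]);        // first column: product ID
                context.write(productId, ONE);   // emit (productId, 1)
            }
        }

        public static class SumReducer
                extends Reducer<Text, LongWritable, Text, LongWritable> {
            @Override
            protected void reduce(Text key, Iterable<LongWritable> values,
                                  Context context)
                    throws IOException, InterruptedException {
                long sum = 0;
                for (LongWritable v : values) {
                    sum += v.get();              // aggregate counts per key
                }
                context.write(key, new LongWritable(sum));
            }
        }
    }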

Good expertise working with varied data, including semi-structured and
unstructured data, using MapReduce programs.

Experienced in optimization techniques for the sort and shuffle phases of
MapReduce programs, and in implementing optimized joins that combine data
from different data sources.

Hands-on experience performing analytics on structured data in Hive using
HiveQL queries, views, partitioning, bucketing, and UDFs (a sample UDF is
sketched below).
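
A minimal sketch of the kind of Hive UDF referred to above, assuming the
classic org.apache.hadoop.hive.ql.exec.UDF style; the class name and behavior
(upper-casing a string column) are hypothetical. Once packaged into a JAR, it
would typically be registered from HiveQL with ADD JAR and CREATE TEMPORARY
FUNCTION, then used in queries over partitioned or bucketed tables.

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    public class UpperCaseUDF extends UDF {
        public Text evaluate(Text input) {
            if (input == null) {
                return null;                     // pass NULLs through unchanged
            }
            return new Text(input.toString().toUpperCase());
        }
    }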

Experience in performance tuning of Hive queries and Java MapReduce programs
for scalability and faster execution.

Hands-on experience writing MapReduce jobs on the Hadoop ecosystem through
Pig Latin, and creating Pig scripts to carry out essential data operations
and tasks.

Worked with different file formats, including JSON, XML, Avro data files,
and text files.

Used Pig Latin scripts, join operations, and custom user-defined functions
(UDFs) to perform ETL operations (see the EvalFunc sketch below).
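
A hedged sketch of a Pig EvalFunc UDF of the kind mentioned above, trimming
and lower-casing a field during an ETL pass; the class name is hypothetical.
In a Pig script it would be registered with REGISTER and called like a
built-in function.

    import java.io.IOException;
    import org.apache.pig.EvalFunc;
    import org.apache.pig.data.Tuple;

    public class CleanField extends EvalFunc<String> {
        @Override
        public String exec(Tuple input) throws IOException {
            if (input == null || input.size() == 0 || input.get(0) == null) {
                return null;                     // nothing to clean
            }
            return ((String) input.get(0)).trim().toLowerCase();
        }
    }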

Excellent understanding and knowledge of NoSQL databases such as HBase and
Cassandra.

Hands-on experience creating Apache Spark RDD transformations on datasets in
the Hadoop data lake (illustrated in the sketch below).
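
A minimal, illustrative Spark (Java API) sketch of RDD transformations over
files already landed in the data lake; the HDFS paths, field layout, and
application name are hypothetical assumptions.

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class SalesByRegion {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("SalesByRegion");
            try (JavaSparkContext sc = new JavaSparkContext(conf)) {
                JavaRDD<String> lines =
                        sc.textFile("hdfs:///data/lake/sales/*.csv");

                // transformation chain: parse, key by region, sum amounts
                JavaPairRDD<String, Double> totals = lines
                        .map(line -> line.split(","))
                        .filter(fields -> fields.length >= 3)
                        .mapToPair(fields -> new Tuple2<>(fields[1],   // region
                                Double.parseDouble(fields[2])))        // amount
                        .reduceByKey(Double::sum);

                totals.saveAsTextFile("hdfs:///data/lake/sales_by_region");
            }
        }
    }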

Experience in the design and development of technical architecture,
requirements, and statistical models.

Good knowledge of designing and implementing analytics solutions per project
proposals.

Prepared scripts in R, Python, or other programming languages to support
data access, manipulation, and reporting functions.

Formulated procedures for integrating R and Python programs with data
sources and delivery systems.

Provided technical assistance for development and execution of test plans
and cases as per client requirements.

Supported technical team members in development of automated processes for
data extraction and analysis.

Kept current on techniques, algorithms, and new methods for statistical
analysis projects.

Prepared detailed technical documentation such as workflows, scripts and
diagrams in coordination with research scientists.

Used Apache Oozie to combine multiple MapReduce, Hive, Pig, and Sqoop jobs
into one logical unit of work.

Experience working with Hadoop in standalone, pseudo-distributed, and fully
distributed modes.

Good knowledge of cloud computing with Amazon Web Services such as EC2 and
S3, which support fast and efficient processing of big data.

Experienced in using different compression codecs, such as LZO and Snappy,
to reduce storage and optimize data transfer over the network (a
configuration sketch follows below).
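
A hedged sketch of enabling Snappy compression in a MapReduce driver, for
both intermediate map output and the final job output; it assumes the Snappy
native libraries are available on the cluster, and the class name is
hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.SnappyCodec;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class CompressedJobDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // compress the intermediate shuffle data with Snappy
            conf.setBoolean("mapreduce.map.output.compress", true);
            conf.setClass("mapreduce.map.output.compress.codec",
                          SnappyCodec.class, CompressionCodec.class);

            Job job = Job.getInstance(conf, "compressed-output-job");
            // compress the final job output as well
            FileOutputFormat.setCompressOutput(job, true);
            FileOutputFormat.setOutputCompressorClass(job, SnappyCodec.class);
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            // mapper/reducer/input configuration omitted for brevity
        }
    }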

Thorough understanding of the Software Development Life Cycle (SDLC), the
Software Test Life Cycle (STLC), and processes across multiple environments
and platforms.

Hands-on experience in database design, using PL/SQL to write stored
procedures, functions, and triggers, and strong experience writing complex
queries in Oracle, DB2, SQL Server, and MySQL.

Created required tablespaces, users, and roles, and granted/revoked user
privileges.

Experience architecting highly scalable, distributed systems using different
open-source tools, as well as designing and optimizing large, multi-terabyte
data warehouses.

Experience integrating state-of-the-art big data technologies into the
overall architecture and leading a team of developers through the
construction, testing, and implementation phases.

Experience with modern version control systems such as Git or SVN.

Ability to transform complex business requirements into technical
specifications.

Used Maven extensively to build JAR files for MapReduce programs and J2EE
applications.

Used Agile methodology to work with IT and business stakeholders to drive
efficient system development.

Resourceful and creative, with high adaptability to change; enjoys new
challenges and learns new skills quickly.

Experienced in all facets of the Software Development Life Cycle (analysis,
design, development, testing, and maintenance) using Waterfall and Agile
methodologies.
