Generative AI Engineering
September 25, 2023
These course chapters have been adapted from the Web Age course WA3309: Generative AI Training to introduce you to the basics of Generative Artificial Intelligence (AI), its applications, and the techniques used to develop and engineer these systems. Contact us for the full version of this live, hands-on course taught by an expert instructor.
Introduction to Generative AI
Spark and ML at Scale
August 22, 2023
These course chapters have been adapted from the Web Age course WA3290: Spark and Machine Learning at Scale Training to introduce you to the world of Spark, a powerful open-source big data processing engine used to create scalable machine learning solutions.
Contact us for the full version of this live, hands-on course taught by an expert instructor.
Chapter 8 – Introduction to Machine Learning
Querying Data in Snowflake
May 3, 2021
In this tutorial, you will learn how to create and run queries in Snowflake. You will also learn about query profiles and views, and review the new Snowsight UI.
According to Snowflake’s documentation, “Snowflake supports standard SQL, including a subset of ANSI SQL:1999 and the SQL:2003 analytic extensions. Snowflake also supports common variations for a number of commands where those variations do not conflict with each other.”
Creating and Working with Databases in Snowflake
May 3, 2021
This tutorial is adapted from the Web Age course Snowflake Cloud Data Platform from the Ground Up.
In this tutorial, you will learn how to create databases, tables, and warehouses in the Snowflake Web UI.
The Snowflake Web UI
April 29, 2021
This tutorial is adapted from Web Age course Snowflake Cloud Data Platform from the Ground Up Training.
In this tutorial, you will familiarize yourself with the Snowflake Web UI (a.k.a. the Web Portal, Snowflake Manager, or Snowflake Console).
Searching with Apache Solr
April 27, 2021
This tutorial is adapted from the Web Age course Apache Solr for Data Engineers.
Part 1. Solr Sets
Solr has many search capabilities; these, of course, depend on the data in the set being queried. As with SQL, it is important to know and understand the data sets before running high-level queries against them.
Let’s work again…
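As an illustrative sketch of issuing a search over Solr’s HTTP select endpoint (the core name “techproducts” and the field “name” are assumptions, not from the course):

import requests

# Query a local Solr core named "techproducts" (an assumed example core)
resp = requests.get(
    "http://localhost:8983/solr/techproducts/select",
    params={"q": "name:apple", "rows": 5},
)
# Recent Solr versions return JSON by default
for doc in resp.json()["response"]["docs"]:
    print(doc.get("name"))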
How to Repair and Normalize Data with Pandas?
April 5, 2021
This tutorial is adapted from Web Age course Data Engineering Bootcamp Training Using Python and PySpark.
When you embark on a new data engineering/data science/machine learning project, right off the bat you may be faced with defects in your input dataset, including but not limited to…
Functional Programming in Python
November 21, 2020
This tutorial is adapted from the Web Age course Introduction to Python Programming.
1.1 What is Functional Programming?
Functional programming reduces problems to a set of function calls.
The functions used, referred to as pure functions, follow these rules (see the sketch below):
Only produce a result
Do not modify the parameters passed to them
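A minimal sketch of these rules in Python (the function names and data are illustrative, not from the course):

# Pure: only produces a result, leaves its input untouched
def add_interest(balances, rate):
    return [b * (1 + rate) for b in balances]

# Impure counterpart: mutates its argument, which pure functions avoid
def add_interest_in_place(balances, rate):
    for i in range(len(balances)):
        balances[i] *= 1 + rate

accounts = [100.0, 250.0]
print(add_interest(accounts, 0.05))  # [105.0, 262.5]
print(accounts)                      # unchanged: [100.0, 250.0]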
What is Splunk?
October 30, 2020
This tutorial is adapted from the Web Age course Operational Data Analytics with Splunk.
1.1 Splunk Defined
“Google for log files” is how Splunk’s creators position it. Splunk is a data-centric platform that offers data practitioners capabilities for data collection…
How to do Data Grouping and Aggregation with Pandas?
October 30, 2020
This tutorial is adapted from the Web Age course Data Engineering Bootcamp Training (Using Python and PySpark).
1.1 Data Aggregation and Grouping
The pandas module offers functions that closely emulate SQL functionality for data grouping, aggregation, and filtering.
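As a minimal sketch of that SQL-style grouping (the DataFrame and column names are illustrative, not from the course):

import pandas as pd

df = pd.DataFrame({
    "dept": ["IT", "IT", "HR", "HR"],
    "salary": [90, 110, 60, 70],
})

# Equivalent of: SELECT dept, AVG(salary) ... GROUP BY dept
avg = df.groupby("dept")["salary"].mean()

# HAVING-style filtering on the aggregated result
print(avg[avg > 80])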
Comparing Hive with Spark SQL
March 9, 2020
This tutorial is adapted from Web Age course Hadoop Programming on the Cloudera Platform.
In this tutorial, you will work through two functionally equivalent examples/demos – one written in Hive (v. 1.1) and the other written using the PySpark API for the Spark SQL module (v. 1.6) – to see the differences…
Data Visualization with matplotlib and seaborn in Python
March 4, 2020
This tutorial is adapted from Web Age course Advanced Data Analytics with PySpark.
1.1 Data Visualization
Common wisdom states that “seeing is believing” and “a picture is worth a thousand words”. Data visualization techniques help users understand the data and its underlying trends and patterns by displaying it in a variety of graphical formats…
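A minimal sketch of the kind of plot the tutorial covers, assuming a recent seaborn (the data is randomly generated, not from the course):

import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns

# Illustrative data: 500 samples from a normal distribution
rng = np.random.default_rng(7)
values = rng.normal(loc=50, scale=10, size=500)

sns.histplot(values, kde=True)  # histogram with a density estimate overlaid
plt.title("Distribution of sample values")
plt.xlabel("value")
plt.show()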
Python Modules and Code Reuse
February 27, 2020
This tutorial is adapted from Web Age course Introduction to Python Development.
1.1 Code Organization in Python
Several organizational terms are used when referring to Python code:
Module – A file with some Python code in it
Library – A collection of modules
Package – A directory of related modules (see the sketch below)
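A quick illustrative sketch of those terms using the standard library (not from the course):

import math                # math is a module: a single file of Python code
from os import path        # os is a package: a directory of modules

print(math.sqrt(16))                  # 4.0
print(path.join("data", "raw.csv"))   # data/raw.csv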
Data Science and ML Algorithms with PySpark
December 11, 2019
This tutorial is adapted from Web Age course Practical Machine Learning with Apache Spark.
8.1 Types of Machine Learning
There are three main types of machine learning (ML): unsupervised learning, supervised learning, and reinforcement learning. We will be learning only about the unsupervised and supervised learning types.
8.2 Supervised vs Unsupervised Learning
Distributed Computing Concepts for Data Engineers
November 15, 2019
1.1 The Traditional Client–Server Processing Pattern
This pattern works well for small-to-medium data sets; fetching 1 TB worth of data, for example, might take longer than an hour.
What is Data Engineering?
November 15, 2019
1.1 Data is King
Data is king and it outlives applications. Applications outlive integrations. Organizations striving to become data-driven need to institute efficient, intelligent, and robust ways for data processing. Data engineering addresses many of the aspects of this process.
1.2 Translating Data into Operational and Business Insights
PySpark Shell
October 17, 2019
1.1 What is Spark Shell?
The Spark Shell offers interactive command-line environments for Scala and Python users. The SparkR shell has so far been thoroughly tested only against Spark Standalone, not all available Hadoop distros, and is therefore not covered here…
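As a quick illustrative sketch (not from the tutorial), the pyspark shell predefines a SparkContext named sc, so you can type expressions like:

# Inside the PySpark shell, `sc` (a SparkContext) is already defined
rdd = sc.parallelize(range(100))
evens = rdd.filter(lambda x: x % 2 == 0)
print(evens.count())  # 50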
Introduction to PySpark
October 16, 2019
1.1 What is Apache Spark?
Apache Spark (Spark) is a general-purpose processing system for large-scale data. Spark is effective for data processing of up to 100s of terabytes…
How to Create a Python Environment in AWS Cloud9?
October 1, 2019
In this tutorial, you will set up an AWS Cloud9 development environment and run a Python script.
Python for Data Science
July 25, 2019
This tutorial is adapted from Web Age course Applied Data Science with Python.
This tutorial provides a quick overview of Python modules and high-power features, the NumPy, pandas, SciPy, and scikit-learn libraries, Jupyter notebooks, and the Anaconda distribution.
Python with NumPy and pandas
July 24, 2019
This tutorial is adapted from Web Age course Applied Data Science with Python.
This tutorial aims at helping you refresh your knowledge of Python and show how Python integrates with the NumPy and pandas libraries.
Part 1 – Set up the Environment
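For a feel of that integration, here is a tiny illustrative sketch (not part of the tutorial’s setup steps):

import numpy as np
import pandas as pd

arr = np.arange(6).reshape(2, 3)                 # a NumPy ndarray
df = pd.DataFrame(arr, columns=["a", "b", "c"])  # pandas wraps the same data
print(df["b"].mean())  # pandas statistics computed over NumPy-backed columns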
Using k-means Machine Learning Algorithm with Apache Spark and R
January 31, 2017
In this post, I will demonstrate the usage of the k-means clustering algorithm in R and in Apache Spark.
Apache Spark (hereinafter Spark) offers two implementations of the k-means algorithm: one is packaged with its MLlib library; the other one exists in Spark’s spark.ml package. While both implementations are currently more or less functionally equivalent, the Spark ML team recommends using the spark.ml package…
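As a minimal sketch of the spark.ml flavor in Python (the post itself demonstrates R; the data here is illustrative):

from pyspark.sql import SparkSession
from pyspark.ml.clustering import KMeans
from pyspark.ml.feature import VectorAssembler

spark = SparkSession.builder.appName("kmeans-sketch").getOrCreate()

# Tiny illustrative data set with two obvious clusters
df = spark.createDataFrame(
    [(1.0, 1.0), (1.5, 2.0), (8.0, 8.0), (8.5, 9.5)], ["x", "y"])
features = VectorAssembler(inputCols=["x", "y"],
                           outputCol="features").transform(df)

model = KMeans(k=2, seed=1).fit(features)
print(model.clusterCenters())  # two centers, one per cluster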
Spark RDD Performance Improvement Techniques (Post 2 of 2)
October 4, 2016
In this post, we will review the more important aspects of RDD checkpointing. We will continue working with the over500 RDD we created in the previous post on caching.
You will remember that checkpointing is a process of truncating an RDD’s lineage graph and saving its materialized data to stable storage…
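A minimal sketch of the mechanics (the checkpoint directory and RDD are illustrative, not the post’s over500 example):

from pyspark import SparkContext

sc = SparkContext(appName="checkpoint-sketch")
sc.setCheckpointDir("/tmp/spark-checkpoints")  # illustrative location

rdd = sc.parallelize(range(1000)).map(lambda x: x * 2)
rdd.checkpoint()             # mark the RDD for checkpointing
rdd.count()                  # an action triggers the actual save
print(rdd.isCheckpointed())  # True; the lineage graph is now truncated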
SparkR on CDH and HDP
September 13, 2016
Spark added support for R back in version 1.4.1, and you can use it in Spark Standalone mode.
Big Hadoop distros that bundle Spark, like Cloudera’s CDH and Hortonworks’ HDP, have varying degrees of support for R. For the time being, CDH has decided to opt out of supporting R (their latest CDH 5.8.x version does not even have sparkR binaries), while HDP (versions 2.3.2, 2.4, …) includes SparkR as a technical-preview technology and bundles some R-related components, like the sparkR script. Making it all…
Spark RDD Performance Improvement Techniques (Post 1 of 2)
September 13, 2016
Spark offers developers two simple and quite efficient techniques to improve RDD performance and operations against them: caching and checkpointing.
Caching allows you to save a materialized RDD in memory, which greatly speeds up iterative or multi-pass operations that need to traverse the same data set over and over again (e.g. in machine learning algorithms).
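A minimal sketch of caching (the RDD and numbers are illustrative):

from pyspark import SparkContext

sc = SparkContext(appName="caching-sketch")

# An RDD we expect to traverse repeatedly
over500 = sc.parallelize(range(1000)).filter(lambda x: x > 500)
over500.cache()         # keep the materialized RDD in memory

print(over500.count())  # first action computes and caches the partitions
print(over500.sum())    # subsequent actions read from memory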
Simple Algorithms for Effective Data Processing in Java
January 30, 2016
The needs of Big Data processing require specific tools which nowadays are, in many cases, represented by the Hadoop product ecosystem.
When I speak to people who work with Hadoop, they say that their deployments are usually pretty modest: about 20 machines, give or take. This may reflect the fact that most companies are still in the technology adoption phase, evaluating this Big Data platform; with time, the number of machines in their Hadoop clusters will probably grow into 3- or even 4-digit territory.