While AWS RDS Oracle offers a robust relational database solution in the cloud, Databricks simplifies big data processing with features such as automated scheduling and optimized Spark clusters. Integrating data from AWS RDS Oracle into Databricks enables you to handle large volumes of data in a collaborative workspace and derive actionable insights in real time. 

This article is a comprehensive guide to AWS RDS Oracle Databricks integration: you will learn how to connect AWS RDS Oracle to Databricks using two different methods, and you will also see the benefits of this integration. 

Overview of AWS RDS Oracle

AWS RDS Oracle is a fully managed database offered through Amazon RDS. It is easy to set up and operate. The RDS service takes care of administrative tasks such as provisioning, monitoring, backups, and software patching, so you can focus on your applications. AWS RDS Oracle also simplifies running production workloads with its Multi-AZ deployment option: the primary database is kept synchronized with a standby replica, and built-in automated failover switches to the standby in case of failure. 

Are you looking for an easy way to carry out AWS RDS Oracle to Databricks integration? Try Hevo Data's reliable, no-code, automated pipelines with 150+ connectors.
Get your free trial right away!

Overview of Databricks

Databricks is an open, unified analytics platform for building, sharing, deploying, and maintaining your organization's data at scale. It is built on a lakehouse foundation that brings together the best elements of a data warehouse and a data lake to manage your data efficiently. The Databricks intelligent data platform integrates seamlessly and securely with cloud storage systems. It also provides a collaborative workspace where data engineers, data scientists, and analytics professionals can work together on data pipelines, machine learning models, and data transformations.

Methods for Migrating Data from AWS RDS Oracle to Databricks 

Let's look at two methods for loading data from AWS RDS Oracle into Databricks.

Method 1: Integrating Data from AWS RDS Oracle to Databricks Using Hevo

Hevo is an ELT data integration platform that simplifies data integration with no-code, real-time, automated data pipelines. It offers 150+ data sources from which you can integrate your data into your desired destination. Hevo's data pipelines are flexible to your needs and can be set up quickly with minimal effort, enabling you to transfer AWS RDS Oracle data to Databricks.

Step 1: Configure Amazon RDS Oracle as Your Source


Follow the steps to configure your source: 

  • Click PIPELINES in the Navigation Bar. 
  • Click + CREATE.
  • On the Select Source Type page, select Amazon RDS Oracle.
  • On the Configure your Amazon RDS Oracle Source page, provide the mandatory details.

AWS RDS Oracle to Databricks: Configure Source Settings

For more information, refer to the Hevo documentation on configuring Amazon RDS Oracle as your source.

Step 2: Configure Databricks as Your Destination

Prerequisites 
  • You must have an active Azure, AWS, or GCP account. 
  • Create a Databricks workspace in your cloud service account. 
  • The URL for your Databricks workspace must be in this format – https://<deployment name>.cloud.databricks.com.
  • To connect to your Databricks workspace, you must: 
  1. Create a Databricks cluster or a SQL warehouse.
  2. Have the database hostname, port number, and HTTP path available.
  3. Have a personal access token (PAT) available.
  • To create the destination in the Hevo data pipeline, you must be assigned any administrator role except the Billing Administrator.

You can configure Databricks as your destination using the recommended method, Databricks Partner Connect. Follow the steps below to configure your destination:

  • Click DESTINATIONS in the Navigation Bar. 
  • Click + CREATE in the Destinations List View. 
  • On the Add Destination page, search for and select Databricks as your destination. 
  • On the Configure Databricks as your Destination page, specify the mandatory details: 

AWS RDS Oracle to Databricks: Configure Destination Settings

For more information, refer to the Hevo documentation on configuring Databricks as the destination. 

Benefits of Using Hevo for AWS RDS Oracle Databricks Integration

  • Automated Data Mapping: Hevo's automated schema mapping feature detects your source data's schema and replicates it in your destination, so you don't have to map it manually. 
  • Data Transformation: Hevo offers data transformation techniques such as drag-and-drop and Python-based transformations, which help you clean and prepare your data for analysis. 
  • Incremental Data Loading: Hevo enables you to transfer modified source data to your destination in real time, optimizing the use of bandwidth at both ends.

Visit our Website to Explore Hevo

Interested in reading about how to transfer data from Amazon Web Services to Databricks? You can refer to this article on how to replicate data from Amazon RDS to Databricks.

Method 2: Exporting Data from AWS RDS Oracle to Databricks Using an S3 Bucket 

This method will teach you how to load a file from AWS RDS Oracle into a Databricks table using an S3 bucket.

Step 1: Transferring Files from the AWS RDS Oracle Instance to an S3 Bucket

You can use the Amazon RDS package rdsadmin_s3_tasks to transfer files between your Oracle DB instance and S3.

Prerequisites
  • Your Amazon RDS DB instance and your Amazon S3 bucket must be in the same AWS Region. 
  • Configure IAM permissions to integrate the RDS Oracle instance with the S3 bucket.
  • To integrate the RDS Oracle instance with the S3 bucket, you must associate your DB instance with an option group that includes the S3_INTEGRATION option.
  • You must upload files from a user-created directory object or the Data Pump directory; directories used by Oracle background processes can't be used. 
  • Before transferring the files to an S3 bucket, go through the requirements for file transfer.

You can upload files from an Oracle DB instance to an S3 bucket as compressed GZIP files and decompress them when unloading the data. In this example, you will upload uncompressed files from the directory. 
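
If your data is still inside an Oracle table rather than in a flat file, you first need to stage it in the directory you plan to upload from. The snippet below is a minimal sketch of one way to do that with Oracle's standard UTL_FILE package; it assumes a hypothetical trips table whose columns mirror the Databricks table created in Step 2, so adjust the table, column list, and file name to your own schema.

-- Minimal sketch (hypothetical "trips" table): write the table out as a CSV file
-- into DATA_PUMP_DIR so that it can be uploaded to S3 in the next step.
DECLARE
  l_file UTL_FILE.FILE_TYPE;
BEGIN
  l_file := UTL_FILE.FOPEN('DATA_PUMP_DIR', 'trips.csv', 'w');
  -- Header row matching the columns of the Databricks table created later.
  UTL_FILE.PUT_LINE(l_file,
    'tpep_pickup_datetime,tpep_dropoff_datetime,trip_distance,fare_amount,pickup_zip,dropoff_zip');
  FOR r IN (SELECT * FROM trips) LOOP
    UTL_FILE.PUT_LINE(l_file,
      TO_CHAR(r.tpep_pickup_datetime, 'YYYY-MM-DD HH24:MI:SS') || ',' ||
      TO_CHAR(r.tpep_dropoff_datetime, 'YYYY-MM-DD HH24:MI:SS') || ',' ||
      r.trip_distance || ',' ||
      r.fare_amount   || ',' ||
      r.pickup_zip    || ',' ||
      r.dropoff_zip);
  END LOOP;
  UTL_FILE.FCLOSE(l_file);
END;
/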

  • Use the rdsadmin.rdsadmin_s3_tasks.upload_to_s3 procedure to upload files from an Oracle DB instance directory into an S3 bucket. The procedure returns a task ID.
  • Execute the following query to upload files from the DATA_PUMP_DIR directory to an S3 bucket named 'mys3bucket': 
SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
      p_bucket_name    => 'mys3bucket',
      p_prefix         => '',
      p_s3_prefix      => '',
      p_directory_name => 'DATA_PUMP_DIR')
   AS TASK_ID FROM DUAL;
  • The SELECT statement returns the ID of the task as a VARCHAR2 value. You can view the task's output by reading its log file with the following query (replace 'task-id' with the ID returned by the procedure): 
SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-task-id.log'));
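
Optionally, you can confirm which files are staged in the directory before or after the upload. The query below is a quick check using the rdsadmin.rds_file_util.listdir helper on the same DATA_PUMP_DIR directory used above:

-- List the files currently present in DATA_PUMP_DIR on the RDS Oracle instance.
SELECT filename, type, filesize, mtime
FROM TABLE(rdsadmin.rds_file_util.listdir(p_directory => 'DATA_PUMP_DIR'))
ORDER BY mtime;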

Step 2: Loading Data from the S3 Bucket to a Databricks SQL Warehouse Using the COPY INTO Command

Prerequisites
  • To access and read from the S3 bucket, you need to configure data access for ingestion.
  • You need the Can manage permission on the Databricks SQL warehouse. 
  • The fully qualified S3 URI must be available. 
  • You must be familiar with the Databricks SQL interface. 

Follow the steps below to transfer AWS RDS Oracle data to Databricks.

Verify Your Access to Cloud Storage
  • In the Databricks sidebar, click New > Query. 
  • In the SQL editor, select a running SQL warehouse and paste the query below, replacing <path> with the fully qualified S3 URI of your bucket folder (for example, s3://<s3-bucket>/<folder>/): 

SELECT * FROM csv.`<path>`;

  • Click Run.
Create a Table in Your Databricks Workspace
  • In the SQL editor, paste and execute the following code:
CREATE TABLE <catalog_name>.<schema_name>.<table_name> (
  tpep_pickup_datetime  TIMESTAMP,
  tpep_dropoff_datetime TIMESTAMP,
  trip_distance DOUBLE,
  fare_amount DOUBLE,
  pickup_zip INT,
  dropoff_zip INT
);
Load Data from the S3 Bucket to the Databricks Table
  • In the sidebar, click New > Query. 
  • In the editor pane, select a SQL warehouse that is running. 
  • Paste the code below into the editor, replace <s3-bucket> with your S3 bucket name and <folder> with the name of your S3 folder, and execute the query.
COPY INTO <catalog-name>.<schema-name>.<table-name>
FROM 's3://<s3-bucket>/<folder>/'
FILEFORMAT = CSV
FORMAT_OPTIONS (
  'header' = 'true',
  'inferSchema' = 'true'
)
COPY_OPTIONS (
  'mergeSchema' = 'true'
);

SELECT * FROM <catalog_name>.<schema_name>.<table_name>;
  • The COPY INTO statement loads your data into the Databricks table you created, and the final SELECT displays the loaded rows. 

Limitations of Exporting Data from AWS RDS Oracle to Databricks Using an S3 Bucket 

  • AWS RDS for Oracle can't access data in an Amazon S3 bucket encrypted with SSE-C, as the Oracle database does not natively support server-side encryption with customer-provided keys. 
  • This method also involves time-consuming extra tasks after the integration, such as monitoring the transfer to the S3 bucket and cleaning up data from the temporary storage.

Use Cases of AWS RDS Oracle to Databricks Integration

  • Databricks helps you analyze your Oracle instance data and generate data-driven insights for decisions about training programs, employee retention, or organizational development. 
  • Through Databricks's analytics capabilities, you can also analyze your social media data, identify emerging trends, and enhance customer engagement.
  • Databricks also provides a unified platform, allowing you to collaborate seamlessly with your teams on data projects.

Why Integrate Data from AWS RDS Oracle into Databricks?

  • Multi-Language Support: Integrating AWS RDS Oracle data into Databricks gives you a scalable, unified platform that supports multiple languages, such as Python, SQL, R, and more. 
  • Advanced Analytics Capabilities: Databricks offers advanced analytics capabilities, such as machine learning (ML), enabling your team to develop and deploy ML models using data stored in your Oracle DB instance. 
  • AI-Powered Analytics: Databricks facilitates AI-powered analytics, supporting deep learning frameworks and helping you implement complex models for natural language processing or image recognition.

Conclusion

Migrating your AWS RDS Oracle data to Databricks can help you drive collaboration and enhance your organization's overall productivity. You can also streamline data processing using various Databricks features, such as advanced analytics, machine learning, and AI. While you can use an S3 bucket as temporary storage for the integration, an ELT platform like Hevo can simplify the data ingestion process with its automated data pipelines. 

FAQs (Frequently Asked Questions)

Q. How can you connect Databricks hosted on Azure with an RDS on AWS? 

You can connect Databricks hosted on Azure with an RDS instance on AWS by using JDBC or ODBC connectivity.
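
As a rough illustration of the JDBC option, the Spark SQL statement below registers a temporary view in a Databricks notebook that reads a table from the RDS Oracle instance. The endpoint, service name, credentials, schema, and table names are placeholders; you also need the Oracle JDBC driver installed on the cluster and network connectivity (for example, VPC peering or a publicly accessible RDS endpoint) between the two clouds.

-- Register a temporary view over an RDS Oracle table using Spark's JDBC data source.
-- All values in angle brackets are placeholders for your own environment.
CREATE TEMPORARY VIEW rds_oracle_source
USING org.apache.spark.sql.jdbc
OPTIONS (
  url 'jdbc:oracle:thin:@//<rds-endpoint>:1521/<service-name>',
  dbtable '<schema>.<table>',
  user '<username>',
  password '<password>',
  driver 'oracle.jdbc.OracleDriver'
);

-- Query the Oracle table from Databricks.
SELECT * FROM rds_oracle_source LIMIT 10;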

Q. Can you connect Databricks to the AWS RDS instance without using the JDBC/ODBC driver? 

You can connect your Databricks application to an AWS RDS instance through an Amazon S3 bucket or Hevo's data pipeline. For more information, refer to the Hevo documentation on AWS RDS to Databricks.

Want to take Hevo for a spin? Sign up for a 14-day free trial and experience the feature-rich Hevo suite firsthand. You can also take a look at the unbeatable Hevo pricing to help you choose the right plan for your business needs.

Visit our Website to Explore Hevo
Saloni Agarwal
Technical Content Writer, Hevo Data

Saloni is a technical content writer who enjoys writing articles about ML, AI, and cloud-based storage platforms. She has experience writing about data integration and related modern technologies.
