Tuesday, August 19, 2025

How to use Oracle SQL Developer extension in Visual Studio Code

In today’s fast-paced development environment, productivity often depends on having the right tools in a single, seamless workspace. For database developers and administrators working with Oracle, switching between multiple applications to write queries, manage schemas, and debug can feel both time-consuming and disruptive. That’s where Visual Studio Code (VS Code), with its lightweight design and rich extension ecosystem, becomes a powerful ally.

The Oracle SQL Developer Extension for VSCode is Oracle's extension for Visual Studio Code. It provides functionality for working with Oracle databases, similar to what one would expect from the desktop Oracle SQL Developer tool, but directly within VS Code.


Let's see how you can set up and use the Oracle SQL Developer extension from within Visual Studio Code.


- Click on the Extensions option in the left pane



- This will bring up the Extensions Marketplace. Let's search for 'oracle sql developer' here.



- Click on the search result and it should open a new tab on the right side for 'Oracle SQL Developer Extension for VSCode'




- Let's click Install to install the extension.

- Installation usually takes less than a minute. Once installed, we should be able to see the SQL Developer option in the left pane.



- Once we click on it, we should see the SQL Developer navigation pane, where we are given an option to create a new database connection. Click on 'Create Connection'




- Let's create a new database connection. I'm going to connect to my Autonomous Data Warehouse (ADW) database that has been provisioned in OCI.

Enter the Connection Name and Username/Password. The Connection Type is going to be Cloud Wallet in my use case, but it can also be set to Basic by providing the Host, Port and other details.
I'm going to select the Wallet file I obtained from my ADW console in OCI.
In the end, we will Test the connection and Save it.
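For reference, the same kind of Cloud Wallet connection can also be made from a terminal using SQLcl. This is only a sketch; the wallet file name and the service name below are hypothetical placeholders for your own ADW details:

# hypothetical wallet path and service name
sql -cloudconfig /path/to/Wallet_MyADW.zip admin@myadw_high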


 
- Once done, we will see our connection in the left pane. Let's right-click it and Connect.



- Once connected, we can expand various sections such as Tables, Views, Packages etc. to see all our existing database components. This is very similar to what we see in the SQL Developer tool.



- Let's open the Invoice table and we'll be able to see the details in a new tab on the right side. We can navigate through various sections such as Columns, Data, Grants, Indexes, etc., very similar to how we access them in the SQL Developer tool.



- Let's go to the Data tab, insert a new record, and fill in the values in the respective columns. As we can see here, it automatically builds an INSERT statement using our data and issues a commit implicitly, as sketched below.
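Under the hood, the statement it generates will be along these lines; the column names here are only illustrative placeholders, not the actual Invoice table definition:

INSERT INTO invoice (invoice_id, invoice_date, amount) VALUES (101, SYSDATE, 250);
COMMIT;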



- We can utilize additional tabs such as Dependencies, where we can see the dependent objects that use the Invoice table. As we can see, the Invoice_Details_V view references the Invoice table, hence it's listed here.



- Similarly, we can access various code units such as Packages, Procedures, Functions, etc. from the left pane. Here, we can see it shows us the package specification as well as the package body, along with the sub-procedures under each package.




- If we click on one of the procedures from within the Package then it will open the source code on the right side.



- Let's modify the sample code to add a test line and click Compile. This should recompile the package with our changes.
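Conceptually, the Compile button re-runs the edited source against the database, roughly equivalent to something like the following; the package and procedure names here are purely hypothetical:

CREATE OR REPLACE PACKAGE BODY invoice_pkg AS
  PROCEDURE log_test IS
  BEGIN
    -- test line added from VS Code
    DBMS_OUTPUT.PUT_LINE('test line added from VS Code');
  END log_test;
END invoice_pkg;
/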



- Now, let's see how we can access the SQLcl terminal from within VS Code.

- Navigate to the Terminal tab. By default, it should show the PowerShell prompt like this.




- Let's right-click on our database connection (ADW in this case) and select the Open SQLcl option from the dropdown.



- It should connect to the database server in a new session and present us with the good old SQL prompt.




- This is our typical SQL prompt, where we can issue any SQL commands the way we normally would on a traditional SQL prompt.
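For instance, a few quick sanity checks at this prompt might look like the following (the table name is just the example used earlier in this post):

show user
select sysdate from dual;
select count(*) from invoice;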



Conclusion:

As we can see, with the Oracle SQL Developer extension, we can work on our Oracle database directly from within the VS Code IDE, without switching to a separate tool such as Oracle SQL Developer or any other database management tool.

We get all the features of VS Code, such as syntax highlighting, code formatting, and version control integration, along with Oracle-specific features such as SQL and PL/SQL support, database exploration and management, and rich debugging and testing capabilities.

By using the Oracle SQL Developer extension in Visual Studio Code, developers can significantly improve productivity and streamline their Oracle database development process, all within a single, powerful, and lightweight editor.



Monday, August 11, 2025

How to enable OCI Object Storage Buckets over SFTP using S3 Compatibility API and SFTPGo

Managing file transfers securely and efficiently is a critical requirement for many organizations, especially when integrating cloud storage into existing workflows. When it comes to Oracle Cloud Infrastructure (OCI), the Object Storage service is widely used for its durability and scalability. It is an internet-scale, high-performance storage platform that is flexible and offers strong data durability and resiliency.

However, many applications and legacy systems still rely on SFTP for file transfers. In such cases, moving data files between these legacy systems and the OCI cloud platform can get tricky.

By combining OCI Object Storage with S3-compatible endpoints and a lightweight, open-source tool like SFTPGo, we can seamlessly enable SFTP access to our OCI Object Storage buckets without restructuring existing processes. 

This article explains, step by step, how to configure OCI Object Storage over SFTP using the S3 Compatibility API and SFTPGo, enabling secure, familiar file transfer workflows in a modern cloud environment.


High level steps:

1. Create Bucket and Enable S3 compatible API

2. Generate Client Secret keys

3. Provision SFTPGo on OCI instance

4. SFTPGo configurations

5. Test the OCI bucket connectivity over SFTP


Let's dive in.


1. Create Bucket and Enable S3 compatible API

- Navigate to the Tenancy details 



- Once on the Tenancy details page, note down the compartment listed under 'Amazon S3 Compatibility API designated compartment' as well as the Object Storage Namespace



If you wish to change the Amazon S3 compatible compartment, you can do so by clicking the 'Edit object storage settings' button and selecting a different compartment





- Let's create a new bucket in the above-mentioned compartment, as shown below, and name it sftpbucket



- Now that we've created a bucket under the S3 compatible compartment, let's move on to step 2.


2. Generate Client Secret keys

- Navigate to profile and go to the User details page



- Go to Token and Keys section



- Scroll down to Customer Secret Keys section and click Generate secret key



- Give the key a relevant name, such as sftpgo, and click Generate

- At this stage, it will generate the key and also show you the Secret.

Make sure to note the Secret down as it won't be displayed again after this stage.

- Also, note down the newly generated Key 




3. Provision SFTPGo on OCI instance


- Navigate to Compute -> Instances



- Create a new instance (or you can use your existing instance)

- If you are creating a new instance, the below references may help:


Placement: AD 1

Image: Oracle Linux 9

Shape: VM.Standard.A1.Flex

Shape build: Virtual machine, 1 core OCPU, 6 GB memory, 1 Gbps network bandwidth

- While provisioning the new instance, you'll be given an option to generate a private-public key pair. Download the private key and store it on your machine.

- Once the instance is provisioned, open its details page, navigate to the Instance access section and note down the Public IP address and username (usually opc)



- Open command prompt on your machine

- Run the below command:

ssh -i <private key file path on your machine> opc@<public IP address>


- This will log us in to the new instance, and you should see a shell prompt similar to this



- Now let's run the below commands in the given sequence:


Create the SFTPGo repository:

ARCH=`uname -m`

curl -sS https://ftp.osuosl.org/pub/sftpgo/yum/${ARCH}/sftpgo.repo | sudo tee /etc/yum.repos.d/sftpgo.repo


Reload the package database and install SFTPGo:

sudo yum update

sudo yum install sftpgo


- At this stage we are done installing SFTPGo on our instance.

- Now, let's start the SFTPGo service and enable it to start at system boot:


sudo systemctl start sftpgo

sudo systemctl enable sftpgo

At this stage, we've started the SFTPGo server on our OCI instance.
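If you want to confirm the service came up cleanly, you can check its status:

sudo systemctl status sftpgo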


The SFTPGo web interface runs on the public IP address of our instance, on port 8080.


Let's make sure we open this port in our firewall config.


Run this command to see the current firewall config

sudo firewall-cmd --list-all


Run the below commands to add port 8080 to the firewall config and reload it

sudo firewall-cmd --add-port=8080/tcp --permanent

sudo firewall-cmd --reload
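To verify that the port was added, list the open ports again:

sudo firewall-cmd --list-ports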


At this stage, we have completed all the steps for installing and enabling SFTPGo on our OCI instance.



4. SFTPGo configurations


- SFTPGo WebUI is accessible at this URL: http://<Public IP of your Instance>:8080/web/admin


- Let's go to this URL in the Chrome browser


- We'll be prompted to configure the Admin account when accessing this URL for the first time


- Set up the Admin username and password, and Save.


- Let's log in to WebAdmin as the admin





- Once we log in as Admin, we should see the below page:





- Let's create an SFTP user that will be used to access our Object Storage bucket


- Click Add


- Let's name our user sftpgo_user and provide a strong password



- Scroll down to the File system section and fill in the below details:


Storage: S3 (Compatible)


Bucket: Bucket name from OCI Object Storage obtained from Step 1


Region: Your OCI Region


Access Key and Access Secret: Obtained from Step 2





Optionally, you can mention Key Prefix if you want to restrict access to a particular folder in the bucket and not provide access to the whole bucket.

For example, if your bucket has a folder named SFTPFiles and you want the SFTP user to have access only to this folder, you can mention SFTPFiles/ in the Key Prefix.


Now, we need to enter the S3 compatible endpoint of the namespace in which the bucket resides.

The syntax for the compatible endpoint URL is:


https://{object-storage-namespace}.compat.objectstorage.{region}.oraclecloud.com



So your endpoint URL will look similar to the example below.
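Assuming a hypothetical namespace of mynamespace and the us-ashburn-1 region, it would be:

https://mynamespace.compat.objectstorage.us-ashburn-1.oraclecloud.com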


- Click Save to create the SFTP user


- Once created, the Admin can see the user and its status on the Users page




- At this stage, we've completed all the SFTPGo configurations



5. Test the OCI bucket connectivity over SFTP


- Let's log in to the WebClient as sftpgo_user




- On the Files page, we can either create a New Folder or Upload a new File





- Create a new folder named test





- Let's upload a test file as well



We will be able to see the new folder and the newly uploaded file in the SFTPGo WebClient






- Now, let's navigate back to the OCI Object Storage bucket and see if these new objects are available there





As we can see, both the folder and the file are available in our bucket.
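The bucket can also be checked from any standard command-line SFTP client. A minimal sketch, assuming SFTPGo is listening on its default SFTP port of 2022 and using a sample file name:

sftp -P 2022 sftpgo_user@<Public IP of your Instance>
sftp> ls
sftp> put testfile.txt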







Conclusion:


By combining the power of OCI Object Storage with the flexibility of SFTPGo and the S3 Compatibility API, organizations can modernize their file transfer infrastructure without disrupting existing processes. This solution offers a secure, cost-effective, and scalable way to support SFTP workflows while taking full advantage of cloud storage. Whether you're migrating legacy systems or building hybrid environments, this approach bridges the gap between traditional file transfer needs and modern cloud capabilities.



Friday, August 1, 2025

How to utilize OCI Event Notifications to monitor OCI Data Integration (OCI-DI) task runs

In the previous post, we learned about the OCI Data Integration tool and its core components such as Data Flows, Pipelines, and Tasks.

As we saw, OCI-DI data integrations and pipelines are executed through various Tasks. The question now is: what mechanism can we put in place to monitor these runs and automatically send notifications to the support team?

Typically, OCI-DI task runs generate OCI Events when tasks complete (success or failure). Data Integration also integrates with OCI public logging and monitoring, giving users visibility into these runs.

Let's take a look at the step-by-step process of setting up notifications on OCI Events to monitor and alert on OCI-DI task runs.


- Navigate to Developer Services and locate Notifications under Application Integration section





- We'll create a Notification topic here. Click Create Topic and let's name it OCI_DI_Notifications




- Now, we'll create a Subscription to this topic. Open the newly created topic and click Create Subscription.



- We can create subscriptions using a variety of protocols:

Email: Associates with an email address as input.
OCI Functions: Associates with a custom Function created in OCI
HTTPS: Associates with a custom HTTPS URL
PagerDuty: Associates with PagerDuty Events URL along with the Integration Key
Slack: Associates with Slack Webhook URL and Token
SMS: Associates with a Cell phone number with its Country Code



- For our use case, we'll use the Email protocol with the desired email address where we want to receive the alerts/notifications.
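As a side note, the same topic and email subscription can also be scripted with the OCI CLI. This is only a sketch; double-check the exact parameter names against your CLI version, and replace the placeholder OCIDs and email address with your own:

oci ons topic create --name OCI_DI_Notifications --compartment-id <compartment OCID>
oci ons subscription create --compartment-id <compartment OCID> --topic-id <topic OCID> --protocol EMAIL --subscription-endpoint you@example.com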

- Once a Subscription is created, we'll see it's in Pending status



- This is because an email will be sent to the recipient and they need to respond to it to confirm that they wish to receive the Notifications from this subscription.

- The email will look like this -



- The recipient clicks Confirm Subscription and they should see this page



- Once the recipient confirms, the subscription will become Active in our OCI Topic.



- At this stage, we need to navigate to Observability and Management, go to OCI Events Service and click Rules.



- We'll be setting up the new OCI Event Rule here.

- Let's click Create Rule. I'll name it OCI_DI Notification Rule

- On the Rule Creation page, we'll see below options:

Condition:

- Event Type: Describes the type of OCI Event
- Attribute: Attribute details of various OCI-DI components such as Applications, Tasks etc.
- Filter Tag: OCI tags to be used as filters, if any.


- We'll use Event Type for our use case.

- Service Name: OCI lists all the available services here. We'll use Data Integration service for our monitoring purpose.



- Event Type: This lists all the possible OCI Events pertaining to a variety of OCI-DI operations at Workspace level and Task level.

- We'll select Task - Begin and Task - End so that we receive a notification whenever an OCI-DI task starts and another one when it ends.




- On the View Example Events page, we can inspect what information will be included in the notification in JSON format, and we can modify it as per our needs, if desired.



- Click Save Changes to save the Rule.


- At this stage, we've completed the Rule and Subscription for monitoring OCI-DI task runs. Let's test it.


- Navigate to Data Integration under Analytics and AI




- Navigate to the desired workspace and application.

- Let's locate the desired task and Run it



- The task will execute and complete in some time.

- We should receive two email alerts, one when the task run started and another one when it ended.


Notification on Task Start:




Notification on Task End:




