In the previous article we looked at MCP with Oracle SQLcl and analysed data from an Oracle 26ai Autonomous Database. Now, in the Oracle Analytics Cloud (OAC) November 2025 update, an MCP server is also available, and we will look at it in action.
Mike Durran and Ravi Bhuma wrote a great article on setting up MCP with Oracle Analytics, which requires just an OAC instance, a suitable desktop client (Claude, Cline or Copilot, for example) running on Windows, macOS, or Linux, plus Node.js version 18+.
If you’re thinking, “wait a minute, OAC already has an AI Assistant built in, so why would I need another one?”, the answer is pretty simple. The AI Assistant runs inside OAC, whereas the OAC MCP server allows external AIs to access OAC’s subject areas and datasets, which can then even be used in agentic workflows.
There are (currently) three data tools available, described in the current OAC documentation:
- Discover (oracle_analytics-discoverData): Lists available datasets and subject areas, helping AI clients identify what data exists.
- Describe (oracle_analytics-describeData): Retrieves dataset metadata (tables, columns, measures, and hierarchies) available for logical SQL and natural language queries, which gives the LLM contextual information.
- Execute (oracle_analytics-executeLogicalSQL): Runs governed logical SQL queries generated by oracle_analytics-generateLogicalSQL, applying OAC’s role-based security, filters, and caching.
Let’s set it up. You can follow along with the instructions in the blog linked above, so I won’t repeat them here. However, it is worth looking at the new MCP Connect option which appears in the profile settings in the November 2025 release of OAC, as that isn’t screenshotted in the Oracle blog. It gives us some instructions as well about installing Node.js, and as further enhancements are made things could change, so it’s worth reading what is written there.

When you download the tool, it has an excellent README.md file.
When I created a new installation of Claude, I didn’t have a claude_desktop_config.json file until I pressed “Edit Config”, which then created it.

So I could now put the MCP call from OAC we saw earlier into the claude_desktop_config.json file. I also chose to use the OAC token for connectivity.
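As a rough sketch of what that entry looks like (the server name, package name, and environment variable names below are illustrative assumptions, not the exact values from the OAC MCP Connect page or the Oracle blog, so copy the real ones from there):

```json
{
  "mcpServers": {
    "oracle-analytics": {
      "command": "npx",
      "args": ["-y", "<oac-mcp-server-package>"],
      "env": {
        "OAC_BASE_URL": "https://<your-instance>.analytics.ocp.oraclecloud.com",
        "OAC_AUTH_TOKEN": "<your-oac-token>"
      }
    }
  }
}
```

The key point is simply that the OAC MCP server is registered under `mcpServers`, launched via Node.js, and given your instance URL and token through environment variables.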

Restarting Claude connects my OAC MCP server, and looking at the tools I can see (and enable/disable) the three tools that come with it.

Let’s ask some questions of the data. Note that I am using the Anthropic Sonnet 4.5 LLM, but of course you can select others – I have yet to try the other Anthropic models (Opus for “complex work” and Haiku for “quick answers”).
I have two datasets of office locations: in one, each office has a performance star rating for 2024; the other is the same, but holds the results from the latest survey in 2025. In my 2025 dataset, I kept the results the same as 2024 apart from changing two of the records to swap their ratings, so an office that was highly rated suddenly dropped to the worst rating, and the other went from the worst to the best.
We can ask the LLM to do that work for us – see what it can spot between those datasets and then perhaps we can ask it some further questions.
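The comparison itself is simple enough to sketch. Assuming we had both years’ ratings locally as dictionaries (the values below are an illustrative subset of my data, not pulled live from OAC), the diff the LLM effectively performs looks like this:

```python
# Illustrative subset of the office ratings (the real data lives in OAC).
ratings_2024 = {"Staines": 1, "Wigton": 5, "Durham": 4, "Haselmere": 2}
ratings_2025 = {"Staines": 5, "Wigton": 1, "Durham": 4, "Haselmere": 2}

# Report every office whose rating changed between the two surveys.
changes = {
    office: (before, ratings_2025[office])
    for office, before in ratings_2024.items()
    if ratings_2025[office] != before
}

for office, (before, after) in sorted(changes.items()):
    direction = "improved" if after > before else "declined"
    print(f"{office}: {direction} from {before} to {after} stars")
```

Only Staines and Wigton should appear in the output, which is exactly the “swap” the LLM needs to spot.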
Firstly, I will check that the client can see the datasets (here it uses OA: Discover Data), and it makes it very clear that it is going to use that tool and then asks for my permission to continue.

The result of this is that I get confirmation that the dataset can be seen. It can now use OA: Describe Data to look at the dataset structures (so it can tell they are suitable for comparison) and use OA: Execute Logical SQL to query the data.

Let’s have a look at some of the Logical SQL generated:
Request
{
`query`: `SELECT XSA('*****@wegobeyond.co.uk'.'Office_Ratings_2024').\"All_Office_Ratings_2024\".\"OFFICE_LOCATION\" AS llm_0, XSA('*****@wegobeyond.co.uk'.'Office_Ratings_2024').\"All_Office_Ratings_2024\".\"SUPPLIER_RATING\" AS llm_1 FROM XSA('*****@wegobeyond.co.uk'.'Office_Ratings_2024') ORDER BY llm_0 FETCH FIRST 1000 ROWS ONLY`,
`maxRows`: 1000
}
Response
{"metadata":{"requestId":"9d63498e-f7b4-47e9-a2e2-960c67c33c64","batchSize":1000,"startTime":1764172930388},"batches":[{"complete":true,"cursorId":"a8549ff6-812c-4ad8-8b1d-5d65d09c8ca7","batchNumber":1,"data":[{"llm_0":"Aberystwyth","llm_1":5.0},{"llm_0":"Bishops Storford","llm_1":4.0},{"llm_0":"Bridlington","llm_1":3.0},{"llm_0":"Durham","llm_1":4.0},{"llm_0":"Falkirk","llm_1":5.0},{"llm_0":"Glasgow","llm_1":4.0},{"llm_0":"Grindale","llm_1":5.0},{"llm_0":"Haselmere","llm_1":2.0},{"llm_0":"Inverness","llm_1":4.0},{"llm_0":"Llandovery","llm_1":5.0},{"llm_0":"Llanidloes","llm_1":3.0},{"llm_0":"Manchester","llm_1":4.0},{"llm_0":"Newark","llm_1":3.0},{"llm_0":"Pembroke","llm_1":5.0},{"llm_0":"Staines","llm_1":1.0},{"llm_0":"Wigton","llm_1":5.0}],"hasMore":false,"totalRecordsStreamed":16,"isComplete":true,"recordsInBatch":16,"timestamp":1764172930398}],"status":{"result":"success","error":false,"message":"Streaming completed successfully"},"summary":{"completed":true,"totalBatches":1,"totalRecords":16,"endTime":1764172930403,"durationMs":15}}
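To make the shape of that response concrete, here is a small sketch (purely illustrative, not part of the OAC tooling) of how a client could flatten the streamed batches into rows; the field names match the response above, and the embedded JSON is a trimmed two-row version of it:

```python
import json

# A trimmed version of the executeLogicalSQL response shown above.
raw = """
{"batches": [{"complete": true, "batchNumber": 1,
              "data": [{"llm_0": "Aberystwyth", "llm_1": 5.0},
                       {"llm_0": "Staines", "llm_1": 1.0}],
              "hasMore": false}],
 "status": {"result": "success", "error": false},
 "summary": {"completed": true, "totalRecords": 2}}
"""

response = json.loads(raw)

# Flatten every batch's data array into one list of (office, rating) rows.
rows = [
    (record["llm_0"], record["llm_1"])
    for batch in response["batches"]
    for record in batch["data"]
]

print(rows)  # [('Aberystwyth', 5.0), ('Staines', 1.0)]
```

Note how the payload separates `batches` (the streamed data), `status` (success or error), and `summary` (totals and timings), which is what lets a client consume large result sets incrementally.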
So we can see exactly the commands that have been run for the LLM to ascertain the data needed for the analysis. The output created was as follows:
Key Changes Between 2024 and 2025 Office Ratings
🎯 Critical Changes – Two Offices Swapped Positions
Major Improvement:
- Staines: Dramatically improved from 1 star → 5 stars (+4 stars)
- This is the most significant positive change
- Went from worst performer to top tier
Major Decline:
- Wigton: Dropped from 5 stars → 1 star (-4 stars)
- This is the most concerning change
- Went from perfect rating to worst performer
📊 Overall Performance Metrics

📈 Offices Maintaining Performance
14 offices maintained their ratings:
- 5-Star offices (5): Aberystwyth, Falkirk, Grindale, Llandovery, Pembroke
- 4-Star offices (5): Bishops Stortford, Durham, Glasgow, Inverness, Manchester
- 3-Star offices (3): Bridlington, Llanidloes, Newark
- 2-Star offices (1): Haselmere
⚠️ Action Items
- Immediate Investigation Required – Wigton
- Understand what caused the dramatic 4-point drop
- Identify operational, management, or supplier relationship issues
- Implement corrective actions urgently
- Best Practice Study – Staines
- Document what changes led to the 4-point improvement
- Share successful practices with other underperforming offices
- Consider Staines as a turnaround case study
- Continued Focus – Haselmere
- Still at 2 stars (unchanged)
- Needs attention to improve supplier relationships
📌 Summary
While the overall average rating remained stable at 3.875, there was a complete reversal between Staines and Wigton. The fact that 14 out of 16 offices maintained their ratings suggests consistent performance across most locations, but the extreme swap between these two offices warrants immediate investigation to understand both the success factors at Staines and the failure factors at Wigton.
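That 3.875 average is easy to verify from the 2024 result set returned earlier, and since the 2025 data merely swaps two values, its average is identical:

```python
# The 16 supplier ratings from the 2024 executeLogicalSQL response above.
ratings_2024 = [5, 4, 3, 4, 5, 4, 5, 2, 4, 5, 3, 4, 3, 5, 1, 5]

average = sum(ratings_2024) / len(ratings_2024)
print(average)  # 3.875

# Swapping two values (Staines 1 <-> Wigton 5) leaves the sum, and
# therefore the average, unchanged between 2024 and 2025.
```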
I was then prompted, “Would you like me to create a visualization comparing these changes or dig deeper into any specific office?”
At this point I think that sounds like a good idea – let’s see what we can do. It creates a React output with three tabs (Side by Side, Changes, and a Detailed table). I’ll just show the Changes one here as an example.



So, we have connected to our OAC instance from a client tool (Claude in this case) and used the new MCP server with an LLM to check for datasets, describe them, and perform analysis. We have then also created some visualisations of that analysed data, which we can share.
I can ask for a visualization that really focuses in on the outliers and also gives me a summary of the others, and I get an interactive React application giving me exactly what I wanted, perfect for inclusion in a summary document.

Actually, it didn’t just comment on the switch: it noted that one office only had a 2-star rating which stayed the same, so there has been no improvement there and that needs attention. This was something I hadn’t thought about, as I was focused on the “big switch” rather than the nuance of a poor performer not improving. Obviously, the results depend on the LLM you are using and also how you prompt it.
Just like any MCP access, there are caveats around security, but this is a powerful facility which opens up a lot of possibilities for agentic workflows using data sourced from structured subject areas and curated datasets. This is something I saw Patrick Goubet demonstrate at the UKOUG 2025 conference, and it will be the subject of a subsequent blog.