Exadata workshop – impressions and notes

I have read articles and specs on Exadata before, but now I have had the opportunity to attend an Exadata hands-on workshop.

I’m writing up these impressions without going too deep into the specs; those are available everywhere.

Basically, Exadata’s architecture splits processing into two layers: database servers (compute processing) and storage servers called “Storage Cells” (data processing). Each Storage Cell is a server with 12 internal disks (the number of cells varies across Quarter, Half, and Full Rack configurations). When a request arrives, the storage evaluates the predicates and filters out unneeded rows; Oracle says it is common to send 10x less data to the database servers, which translates into improved I/O and smaller SGAs. On top of that, the servers have dual-ported 40 Gb/sec InfiniBand cards.
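
The division of labor above can be sketched as a toy model (a Python illustration only; the function and field names are invented and this is not Oracle’s actual protocol): the storage layer applies the WHERE predicate before shipping anything, so the database layer only ever sees the matching rows.

```python
# Toy model of storage-side predicate filtering, in the spirit of Smart Scan.
# Illustrative only: names and structures are invented, not Oracle interfaces.

def storage_cell_scan(rows, predicate):
    """Storage layer: scan local disks and filter rows before shipping."""
    return [row for row in rows if predicate(row)]

def database_server_query(cells, predicate):
    """Database layer: collect pre-filtered rows from every cell."""
    results = []
    for cell_rows in cells:
        results.extend(storage_cell_scan(cell_rows, predicate))
    return results

# One million rows spread across three hypothetical cells.
cells = [[{"id": i, "amount": i % 100} for i in range(n, 1_000_000, 3)]
         for n in range(3)]

matches = database_server_query(cells, lambda r: r["amount"] > 95)
print(len(matches))  # 40000 -- only 4% of the rows reach the "database server"
```

The point of the sketch is the data-volume asymmetry: the cells scan a million rows, but only forty thousand cross the interconnect.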

Exadata can also offload other work to the storage side, such as the block filtering for incremental backups and the processing power for encryption.

Note that joins, aggregations, and other complex query processing are done on the database servers, not on the storage.

Also, the storage can create indexes that maintain a summary of the data in memory, such as MIN and MAX values per region. This way Oracle knows whether a block can contain data within the range of the WHERE condition and skips those that can’t. Storage indexes are completely automatic and transparent.
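
The skipping idea can be illustrated with a zone-map-style sketch (plain Python, purely illustrative; the real storage indexes are in-memory structures the cells maintain on their own):

```python
# Simplified sketch of MIN/MAX block skipping, in the spirit of storage
# indexes. Illustrative only; not Oracle's internal implementation.

def build_zone_map(blocks, column):
    """Record the MIN and MAX of `column` for each storage region."""
    return [(min(r[column] for r in block), max(r[column] for r in block))
            for block in blocks]

def scan_with_skipping(blocks, zone_map, lo, hi, column):
    """Read only the blocks whose [MIN, MAX] overlaps the WHERE range."""
    hits, blocks_read = [], 0
    for block, (bmin, bmax) in zip(blocks, zone_map):
        if bmax < lo or bmin > hi:
            continue                      # block cannot contain matches: skip it
        blocks_read += 1
        hits.extend(r for r in block if lo <= r[column] <= hi)
    return hits, blocks_read

# Data loaded in roughly sorted order, so the MIN/MAX ranges stay tight.
blocks = [[{"order_id": i} for i in range(start, start + 1000)]
          for start in range(0, 100_000, 1000)]
zmap = build_zone_map(blocks, "order_id")

rows, read = scan_with_skipping(blocks, zmap, 42_000, 42_499, "order_id")
print(len(rows), read)  # 500 matching rows after reading 1 of 100 blocks
```

As in the sketch, the benefit depends on how well the physical order of the data correlates with the filtered column.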

Exadata also offers two types of compression, Query (speed optimized) and Archive (size optimized), with two levels each (HIGH and LOW). The compression is built on “Hybrid Columnar Compression” technology, meaning that the data is organized and compressed by column.

Reading compressed tables is also faster, reducing the I/O in a proportion similar to the compression ratio. I saw compression ratios from 1.6 (OLTP Low) to 54.7 (Archive High); companies can save a lot on storage while speeding up the I/O.
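
Why columnar organization helps can be shown with a toy experiment (plain zlib in Python, purely illustrative; HCC’s actual formats, algorithms, and compression units are Oracle-specific): grouping values column by column puts similar bytes next to each other, which generic compressors exploit.

```python
import json
import zlib

# Toy demonstration of row-major vs column-major compressibility.
# Illustrative only; HCC's real formats and ratios are different.
rows = [{"country": "ES", "status": "SHIPPED", "qty": i}
        for i in range(10_000)]

# Row-major layout: each record serialized in turn.
row_major = json.dumps(rows).encode()

# Column-major layout: all values of one column stored contiguously.
columns = {k: [r[k] for r in rows] for k in rows[0]}
col_major = json.dumps(columns).encode()

# The column-major form typically compresses noticeably smaller, because
# the low-cardinality columns collapse into long identical runs.
print(len(zlib.compress(row_major)), len(zlib.compress(col_major)))
```

The effect in this toy is strongest on repetitive columns like `country` and `status`, which mirrors why archival, low-churn data gets the most dramatic ratios.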

In addition to the disks, Exadata can use flash cards (yes, that’s right: no SAS or SATA interfaces, but internal cards). The flash can be used as a cache, which speeds up access, especially for OLTP databases, or it can be configured as ASM disk groups.
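
The caching behavior is the familiar read-through pattern, sketched below (a toy LRU cache in Python, purely illustrative; the real flash cache is managed automatically by the cells and its policies are Oracle’s own):

```python
from collections import OrderedDict

# Toy read-through LRU cache, in the spirit of a flash cache in front of
# spinning disks. Illustrative only; not Oracle's implementation.
class FlashCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def read(self, block_id, read_from_disk):
        if block_id in self.store:
            self.hits += 1
            self.store.move_to_end(block_id)      # keep hot blocks resident
            return self.store[block_id]
        self.misses += 1
        data = read_from_disk(block_id)           # slow path: go to disk
        self.store[block_id] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)        # evict the coldest block
        return data

cache = FlashCache(capacity=100)
for block in [1, 2, 3, 1, 1, 2] * 1000:          # OLTP-style hot working set
    cache.read(block, lambda b: f"block-{b}")
print(cache.hits, cache.misses)                   # 5997 3
```

The OLTP angle is exactly this: a small, hot working set re-read constantly is where a cache in front of the disks pays off most.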

Oracle is also marketing Exadata for database consolidation, so it includes features such as a resource manager for I/O (IORM, configured directly on the cells) and instance caging (limiting the number of CPUs a database can use), allowing companies to mix and consolidate databases while maintaining priorities.
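
The idea behind prioritized consolidation is proportional sharing, sketched below (a toy allocation in Python with hypothetical database names; real IORM plans are defined on the cells and are far richer than this):

```python
# Toy proportional-share allocation, in the spirit of an I/O resource plan.
# Illustrative only; database names and share values are invented.

def allocate_iops(total_iops, shares):
    """Split available IOPS among databases in proportion to their shares."""
    total_shares = sum(shares.values())
    return {db: total_iops * s // total_shares for db, s in shares.items()}

plan = {"PROD": 60, "REPORTING": 30, "DEV": 10}   # hypothetical databases
alloc = allocate_iops(10_000, plan)
print(alloc)  # {'PROD': 6000, 'REPORTING': 3000, 'DEV': 1000}
```

The appeal for consolidation is that a noisy reporting workload can be capped by plan rather than by buying it a separate machine.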

I also noticed that Exadata relies heavily on parallelism. Enterprise Manager lets you see the degree of parallelism of the queries along with some other important values: database time, real time, and how much processing is being offloaded to the storage side (cool!).

The way I see it, Exadata is such a beast because:

  1. Two layers of processing: database nodes and storage cells working together. The database servers are dedicated to computing on the data, offloading row filtering, compression, encryption, etc. to the storage.
  2. Brute, modular force: powerful servers, plenty of memory, ultra-fast networking, and fast storage, all coordinating tasks to make things faster. Simply put: fewer bottlenecks.

Please note that I only experienced all this in a workshop, not in the “real world”. Still, I consider the architecture robust and intelligent; it provides brute force and speed “bumper to bumper”. So far, I love it!

