Monday, February 28, 2011

Why You Need to Partition the Database and Applications To Scale with Oracle RAC/Exadata

On Friday I talked about the fundamental difference between Oracle RAC / Exadata and DB2 pureScale. And now I want to dive deeper into why RAC applications need to be cluster-aware to perform and scale well.

Let’s use a small example to show the differences: a two-server cluster (each server is a node in RAC terms, a member in pureScale terms) and a database accessed by applications connecting to those servers.

In the RAC case, if a user sends a request to server 1 to update a row, say for customer Smith, server 1 must first read that row from the database into its own memory; only then can it work on the row (i.e., apply the transaction). Then another user sends a request to server 2 asking it to update the row for customer Jones. Again, server 2 must first read that row into memory before it can work on it. So far there are no issues, but let’s go on.

Now what happens if another user wants to update the data for customer Jones but is routed to server 1? In this case server 1 doesn’t have that row; it only has the row for customer Smith. So server 1 sends a message to server 2 asking it to ship the row for customer Jones over. Once server 1 has a copy of the Jones row it can work on that transaction. Server 1 now holds both rows (Jones and Smith), so any transaction for either customer that arrives there can be processed right away.

The problem now is that any transaction (for customer Smith or Jones) that goes to server 2 forces that server to ask server 1 for the row, since server 2 no longer has any rows it can work on directly.

As transactions are randomly distributed between the two servers (in order to balance the workload), the rows for these customers must be shipped back and forth between them. This makes very inefficient use of resources: heavy network traffic and a lot of coordination messages between the two servers just to arbitrate access to the data. That overhead limits both the scalability and the performance of a RAC cluster. To make RAC scale you have to find the bottlenecks and remove them. In most cases the bottleneck is data being shipped back and forth between nodes, and it is difficult to find in the first place because you have to look in many different places across the cluster for the hot spots. To solve the problem, you have to repartition your application and your database.
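The ping-pong effect described above can be sketched with a toy simulation. This is my own simplification, not anything from Oracle or IBM: a single dictionary stands in for "which server's cache currently holds each customer's row," and we simply count how often a row has to move between servers under random routing versus partition-aware routing.

```python
import random

def count_transfers(workload, route):
    """Toy model of a 2-server shared-data cluster: each row lives in
    one server's cache at a time; if a transaction lands on the other
    server, the row must first be shipped across the interconnect."""
    cache_owner = {}   # row key -> server (0 or 1) holding it in cache
    transfers = 0
    for row in workload:
        server = route(row)
        if cache_owner.get(row, server) != server:
            transfers += 1            # row ships between the servers
        cache_owner[row] = server
    return transfers

random.seed(1)
workload = [random.choice(["Smith", "Jones"]) for _ in range(10_000)]

# Random routing (pure load balancing): either server may get any row.
random_routing = count_transfers(workload, lambda row: random.randint(0, 1))

# Partition-aware routing: all of a customer's work goes to one server.
partitioned = count_transfers(workload, lambda row: 0 if row == "Smith" else 1)

print(random_routing)   # roughly half the transactions ship a row
print(partitioned)      # 0: the rows never move
```

Routing by customer (the second case) is exactly the cluster-awareness work that RAC pushes onto the application and schema designer.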

DB2 pureScale, on the other hand, provides near-linear scalability out to over 100 members (servers) with no partitioning of the application or the database.

Friday, February 25, 2011

Hey Oracle Customers - Moving to DB2 and pureScale is easier and cheaper than moving to Exadata

So, what is the best upgrade path from a single instance of Oracle?

Oracle says moving to Exadata is as easy as 1-2-3!
If you are an existing Oracle customer, you have probably been getting a lot of pressure to move to Oracle’s shiny new toy, Exadata, and you have probably been hearing that you can consolidate all of your databases onto a single Exadata system. But it is not as easy as it seems!

The Oracle upgrade is harder than just moving data
If your existing applications are running on a single instance of Oracle (i.e. not on Real Application Clusters – aka RAC), then there is a lot more involved than simply moving your data. In order to get good performance on Oracle RAC (and Exadata is an Oracle RAC cluster with specialized I/O servers) you need to modify your database schemas and applications to make them RAC-aware. 

DB2 pureScale makes it quick and easy to upgrade
DB2 pureScale, on the other hand, provides transparent application scalability: you can quickly move your data and applications to DB2 without having to change the schema or the application to make them cluster-aware.

The difference between RAC and DB2 pureScale
The reason that RAC requires cluster awareness and DB2 pureScale does not comes down to a fundamental difference in their architectures. Both use shared disk for scale-out, but that is the only real similarity: Oracle RAC uses a distributed locking mechanism, while DB2 pureScale uses a centralized one.
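To make the contrast concrete, here is a toy message-count model (my own simplification, not either vendor's actual wire protocol): in a centralized scheme, a lock request is always a single round trip to the one coordination facility, while in a distributed scheme the cost depends on where the resource happens to be mastered and which node's cache currently holds it.

```python
def centralized_round_trips(requester: int) -> int:
    """Centralized scheme (pureScale-style): every lock request is one
    synchronous round trip to the coordination facility, no matter
    which node asks or where the data was last used."""
    return 1

def distributed_round_trips(requester: int, master: int, holder: int) -> int:
    """Distributed scheme (RAC-style): cost varies with where the
    resource is mastered and which node's cache holds it."""
    trips = 0
    if master != requester:
        trips += 1   # ask the resource's master node for the grant
    if holder != requester and holder != master:
        trips += 1   # the current holder must ship the row/lock over
    return trips

# Centralized: constant cost from any node.
print(centralized_round_trips(0), centralized_round_trips(1))   # 1 1

# Distributed: cost depends on data placement, which is why routing
# transactions to the "right" node (partitioning) matters so much.
print(distributed_round_trips(0, master=0, holder=0))   # 0
print(distributed_round_trips(0, master=1, holder=2))   # 2
```

In the distributed case the application can only get the cheap path by steering each transaction to the node that already masters and caches its data; in the centralized case the cost is the same everywhere, so no steering is needed.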

Actual work involved to move to Exadata versus pureScale


Tasks and Time Required

Task                                               DB2 pureScale     Oracle Exadata
Move database and schema                           Days to weeks     Days to weeks
Re-partition the database                          Not required      Weeks to months
Modify the application to partition data access    Not required      Not required
SQL remediation                                    Couple of days    Couple of days
Test and tune                                      Multiple weeks    Multiple weeks
Total time

The data movement, test and tuning time will be similar, but the time to “fix” the application will be significantly longer with Oracle RAC and Exadata than with DB2 pureScale.

On Monday I will dig into the details of why you need to partition your database and application to make it RAC-aware.

Tuesday, February 15, 2011

HP Itanium Customers can save money by switching to DB2

Oracle expects HP Itanium customers to pay more to run the Oracle database software, yet it has cut the price for its own Sun hardware.

Customers can save money on license and maintenance fees by moving their applications off Oracle to DB2 9.7 or SQL Server.

What is the most cost effective way to move off the Oracle database?

In December Oracle hiked the price of its database software on HP Itanium-based servers, leaving customers with two choices: pay up, or pay to move to a different database. Since HP has partnered so closely with Microsoft over the past few years, you might think SQL Server would be a natural destination for these customers. But many of them run Linux, not Windows, and require an enterprise-ready database server. In addition, migrating from one database to another has traditionally been a long and arduous process that can cost as much as, or more than, what customers might save in lower license fees.

Enter DB2 9.7 for Linux, UNIX, and Windows: with it, the world of database migrations has undergone a paradigm shift. DB2 9.7 can run Oracle PL/SQL and Sybase T-SQL with little to no change because it includes a "compatibility layer". This layer is not a translator; the support is built directly into the DB2 database engine itself, so there is no loss of speed due to translation.

Now a customer who wants to consolidate multiple databases, or move off a database because of skyrocketing costs, can quickly evaluate their application to determine what statements, if any, need to change to run on DB2. They can then move the database schema and data into DB2, turn on DB2’s self-tuning memory to tune the system, and start running on DB2.

Not only does DB2 9.7 drastically lower the cost of migration and let you migrate in days or weeks rather than months, it also reduces risk: very little code needs to change, so existing test cases barely change and there is little chance of introducing bugs, because developers keep working in the tools and language they are used to.

Don't just take my word for it: analyst firms like Forrester and Gartner agree.