Sarbjit Parmar's Hello World

Oracle Real World Performance Tuning Training

5/1/2016

A few weeks back I got a chance to attend Oracle's Real World Performance tuning class. The class used Exadata hardware and Oracle Database 12c. Here is a very interesting set of findings that I would like to present:
  1. Elsewhere on this site I presented a comparison of a row-based database (Oracle's row-based implementation) with a columnar database (Infobright's MySQL-based implementation). While my comparison was between approaches, not vendors, I thought this was the time to bring in some relevant (Oracle-favoring) information that builds on that comparison. In this class, Oracle used a similar dataset (a similar schema/data generator) and achieved 16-17x compression with the Exadata edition of the database, which is in the same order of magnitude as Infobright's implementation. This reinforces the lesson that columnar storage can have significant benefits for analytics datasets and for use cases where there is no need to update existing data.
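As a sketch of the kind of compression discussed above: on Exadata, a columnar-compressed copy of a table can be created declaratively with Hybrid Columnar Compression. The table names here are hypothetical, and the achievable ratio depends entirely on the data:

```sql
-- Illustrative only: build an archival copy using Exadata's Hybrid
-- Columnar Compression; QUERY HIGH trades CPU at load time for a
-- higher compression ratio on read-mostly data
CREATE TABLE sales_archive
  COMPRESS FOR QUERY HIGH
AS SELECT * FROM sales;
```

This fits the same use case noted above: analytic data that is loaded once and not updated in place.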
  2. From the client/server days I knew that having a server layer between the client and the database was very beneficial, and that correctly implemented connection pooling is very important for the health of the database server. This class demonstrated how the database can come to a halt if the number of connections (existing, active or passive, plus incoming) grows beyond a certain limit. This is very important information that application developers and maintainers need to use when architecting, designing, and supporting applications. For more specific information please watch these videos (RWP#2, RWP#3, RWP#13, RWP#14) on the Oracle Real-World Performance Learning Library.
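A quick way to see the symptom described above is to look at how many sessions are actually doing work versus sitting idle. This is just a first diagnostic step, not a tuning prescription:

```sql
-- How many sessions are working vs. idle right now; an ACTIVE count
-- far beyond the machine's CPU capacity is a warning sign that the
-- connection pool is oversized or unbounded
SELECT status, COUNT(*) AS sessions
FROM   v$session
GROUP  BY status;
```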
  3. Tom Kyte and others have been evangelizing set-based processing for a while, and this class demonstrated that case with billions of rows processed in a matter of minutes using the Exadata edition of Oracle. My own experience in this area has been with processing tens of millions of rows on Oracle 9i, which would take hours, with lots of false starts (presented elsewhere on this site). The key learning from this class was that it is important to size the hardware I/O bandwidth to achieve the processing times needed, rather than (or perhaps besides) tweaking other knobs. Some of the feature enhancements leading up to the 12c edition help this cause, especially around the amount of default logging (redo/undo) when performing bulk/direct loads. Use of full table scans; partitioning features; cautious use of indexes; using correct metadata without enforcing all the constraints; and gathering stats at the correct place are some of the other important implementation details that should go into this approach. This was not all new information, but a good reinforcement of the fact that the Real World Performance team still stands by this approach (not just the marketing, sales, or product management teams).
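The set-based idea above can be sketched in one statement. Instead of a PL/SQL loop that fetches and inserts one row at a time, a single direct-path insert lets the database do the work in bulk (table and column names here are illustrative):

```sql
-- Set-based replacement for a row-by-row loop; the APPEND hint
-- requests a direct-path insert, which reduces redo/undo overhead
-- on bulk loads
INSERT /*+ APPEND */ INTO sales_summary (product_id, total_amount)
SELECT product_id, SUM(amount)
FROM   sales
GROUP  BY product_id;
COMMIT;
```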
  4. I learned about the use of parallel/pipelined functions, which in the real world can come in useful for re-implementing code that would be very costly to rewrite. This feature has been around for a while, but this was the first time I saw a use case for putting it to use.
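For readers who have not used this feature, here is a minimal pipelined table function: rows are streamed to the caller as they are produced rather than collected into memory first. The names are mine, not from the class:

```sql
-- A collection type for the function's rows
CREATE TYPE num_tab AS TABLE OF NUMBER;
/
-- Minimal pipelined function; existing procedural logic can be
-- wrapped this way and then queried like a table
CREATE OR REPLACE FUNCTION gen_nums(n IN NUMBER)
  RETURN num_tab PIPELINED
IS
BEGIN
  FOR i IN 1 .. n LOOP
    PIPE ROW (i);
  END LOOP;
  RETURN;
END;
/
-- Consumed in plain SQL:
SELECT COLUMN_VALUE FROM TABLE(gen_nums(5));
```

The parallel-enabled variant additionally takes a REF CURSOR argument with a PARTITION BY clause, which is what lets parallel query servers divide the input.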
  5. The need for caution when setting degrees of parallelism. The best approach is to cap the maximum number of parallel servers at somewhere between 2x and 10x the CPU count, based on the workload and the supporting infrastructure (disk/network).
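An explicit, modest degree of parallelism can be stated per statement or per table, rather than left unbounded. The table name and the choice of 8 are illustrative, to be sized against the guideline above:

```sql
-- Statement-level: request a specific degree of parallelism
SELECT /*+ PARALLEL(s, 8) */ COUNT(*)
FROM   sales s;

-- Or cap it at the table level so all queries inherit it
ALTER TABLE sales PARALLEL 8;
```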
  6. Using bind variables was emphasized again (again, as it has been since Oracle 7). This is a feature that I have learned to know well, but I can't be confident that Oracle has worked out all the issues around this area. We know why bind variables are important (parsing costs, shared pool pressure, etc.). But in the real world, changing applications to use bind variables can be a time-consuming effort, and time is money. So there is a tempting feature available to overcome that hurdle: setting CURSOR_SHARING = FORCE (or, in older releases, SIMILAR). The Real World Performance team dissuades from using this feature, and I would endorse that based on my real-world experience as well. Then there is the whole business of the cost-based optimizer creating execution plans via bind peeking, picking a plan based on the first bind value it sees; when that value is not representative of the rest of the dataset, this can lead to a suboptimal plan. Oracle has come up with features that help alleviate the issue, such as aging out those plans (more later in a separate post); but I certainly wish the CURSOR_SHARING feature would be removed if it has been leading to all that grief in the real world. I know that is not simple for a product with this large a footprint, but there could be some creative solutions available to render it harmless.
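The contrast is easy to show in SQL*Plus syntax. The table and values are hypothetical; the point is that the literal form hard-parses a new cursor for every distinct value, while the bind form shares one:

```sql
-- Literal predicate: each distinct customer_id produces a distinct
-- SQL text, so each one hard-parses its own cursor
SELECT order_total FROM orders WHERE customer_id = 1042;

-- Bind variable: one shared cursor serves every value
VARIABLE cust NUMBER
EXEC :cust := 1042
SELECT order_total FROM orders WHERE customer_id = :cust;
```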
  7. It was great to see that newer versions allow gathering statistics while data is being loaded, along with some new features around the use of histograms. I would encourage readers to read up on the statistics-gathering changes in each version.
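For reference, the explicit form of stats gathering looks like this; SIZE AUTO lets Oracle decide where histograms are worthwhile based on column usage and skew (the table name is illustrative):

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => USER,
    tabname    => 'SALES',
    method_opt => 'FOR ALL COLUMNS SIZE AUTO');
END;
/
```

In 12c, direct-path loads such as CREATE TABLE AS SELECT can gather these statistics online as part of the load itself, which is the enhancement mentioned above.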
  8. Oracle has implemented a new feature, a transparent in-memory column store, which can be optimized for query-heavy or update-heavy applications. It works transparently to the application code and can come in very useful for meeting the performance goals of your application.
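Enabling it is a one-line declaration per table; the application's SQL is unchanged, which is what "transparent" means here (table name illustrative):

```sql
-- 12c In-Memory column store: maintain a columnar copy of the
-- table in memory; PRIORITY HIGH populates it eagerly at startup
ALTER TABLE sales INMEMORY PRIORITY HIGH;
```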
The best learning, however, was the reinforcement of a lesson I learned a while back: before reaching for any new feature (e.g. the memory-based columnar store), make sure that all other avenues for making the application optimal have been explored. A "costly/bad" approach may provide some relief, yet may well end up hurting the performance goals. The way to get there is by spending the time to identify and address the root causes of the problem.




