Sarbjit Parmar's Hello World

Oracle Real World Performance Tuning Training

5/1/2016


 
A few weeks back I got a chance to attend Oracle's Real World Performance tuning class. The class used Exadata hardware and Oracle Database 12c. Here is the set of findings I found most interesting:
  1. Elsewhere on this site I presented a comparison of a row-based database (Oracle's row-based implementation) with a columnar database (Infobright's MySQL-based implementation). My comparison was between approaches, not vendors; nevertheless, this seems like the right time to add some relevant information (favoring Oracle) that builds on that comparison. In this class Oracle used a similar dataset (a similar schema/data generator) and achieved roughly 16-17x compression with the Exadata edition of the database, which is in the same order of magnitude as Infobright's implementation. This reinforces the lesson that columnar storage can have significant benefits for analytic datasets and for use cases where existing data does not need to be updated.
  2. From the client-server days I knew that placing a server layer between the client and the database, with correctly implemented connection pooling, is very important for the health of the database server. This class demonstrated how the database can come to a halt if the number of connections (existing, active or passive, plus incoming) grows beyond a certain limit. This is important information that application developers and maintainers need to apply when architecting, designing and supporting applications. For more specific information please watch these videos (RWP#2, RWP#3, RWP#13, RWP#14) on the Oracle Real-World Performance Learning Library.
  3. Tom Kyte and others have been evangelizing set-based processing for a while, and this class demonstrated the case with billions of rows processed in a matter of minutes on the Exadata edition of Oracle. My own experience in this area has been processing tens of millions of rows on Oracle 9i, which would take hours, with lots of false starts (presented elsewhere on this site). The key lesson from this class was that it is important to size the hardware I/O bandwidth to achieve the processing times needed, rather than (or perhaps besides) tweaking other knobs. Some of the feature enhancements leading up to the 12c edition help this cause, especially around the amount of default logging (redo/undo) when performing bulk/direct loads. Use of full table scans; partitioning features; cautious use of indexes; using correct metadata without enforcing all the constraints; and gathering statistics at the correct point are some of the other important implementation details that go into this approach (see the first sketch after this list). This was not all new information, but it was a good reinforcement that the Real World Performance team still stands by this approach (not just the marketing, sales or product management teams).
  4. I learned about parallel/pipelined functions, which in the real world can be useful for re-implementing code that would be very costly to rewrite. The feature has been around for a while, but this was the first time I saw a use case for putting it to work (see the pipelined-function sketch after this list).
  5. Use caution when setting degrees of parallelism. A reasonable approach is to cap the maximum number of parallel servers at roughly 2-10x the number of CPUs, based on the workload and the supporting infrastructure (disk/network).
  6. The use of bind variables was emphasized again (again, as it has been since Oracle 7). This is a feature I have learned to know well, but I can't be confident that Oracle has worked out all the issues in this area. We know why bind variables are important (parsing costs, shared pool pressure, etc.). But in the real world, changing an existing application to use bind variables can be a time-consuming effort, and time is money. So there is a tempting shortcut available: setting CURSOR_SHARING = FORCE or SIMILAR. The Real World Performance team dissuades people from using this feature, and I would endorse that based on my own real-world experience. Then there is the whole business of the cost-based optimizer creating execution plans based on bind peeking, and then picking a plan based on the first incoming value, which, when it does not represent the rest of the dataset, can lead to a suboptimal plan. Oracle has added features that help alleviate this issue by aging out those plans (more later in a separate post); but I certainly wish the CURSOR_SHARING feature were removed if it has been leading to all that grief in the real world. I know that is not simple for a product with this large a footprint, but there could be creative ways to render it harmless. A bind-variable sketch appears after this list.
  7. It was great to see that newer versions allow gathering statistics while data is being loaded, along with some new features around the use of histograms. I would encourage readers to read up on the statistics-gathering changes in each version (a small sketch follows this list).
  8. Oracle has implemented a new, transparent in-memory column store that can be optimized for query- or update-style applications; it works transparently to the code and can be very useful for meeting the performance goals of your application (see the last sketch after this list).
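Below are a few sketches that make some of the items above concrete. First, for item 3, a minimal set-based, direct-path load. The table and column names (sales_stage, sales_fact) are hypothetical, and the hints and settings would need to be adapted to your environment:

    ALTER SESSION ENABLE PARALLEL DML;

    -- Direct-path (APPEND), parallel insert; combined with NOLOGGING on the
    -- target, redo/undo generation and elapsed time drop dramatically
    -- compared with a row-by-row PL/SQL loop.
    INSERT /*+ APPEND PARALLEL(8) */ INTO sales_fact
    SELECT order_id, order_date, amount
    FROM   sales_stage;

    COMMIT;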
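For item 4, a minimal pipelined table function. The row type, source cursor and transformation are all hypothetical placeholders; the point is the shape of the feature, not a specific implementation:

    -- Illustrative types and transformation only.
    CREATE OR REPLACE TYPE order_row AS OBJECT (order_id NUMBER, amount NUMBER);
    /
    CREATE OR REPLACE TYPE order_tab AS TABLE OF order_row;
    /
    CREATE OR REPLACE FUNCTION transform_orders (p_cur SYS_REFCURSOR)
      RETURN order_tab PIPELINED PARALLEL_ENABLE (PARTITION p_cur BY ANY)
    IS
      v_id     NUMBER;
      v_amount NUMBER;
    BEGIN
      LOOP
        FETCH p_cur INTO v_id, v_amount;
        EXIT WHEN p_cur%NOTFOUND;
        -- Apply the legacy transformation here, then pipe the row out
        PIPE ROW (order_row(v_id, v_amount * 1.1));
      END LOOP;
      RETURN;
    END;
    /
    -- The function can then be used as a row source in plain SQL, e.g.:
    -- SELECT * FROM TABLE(transform_orders(
    --   CURSOR(SELECT order_id, amount FROM orders_in)));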
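For item 6, the difference between literal and bind-variable SQL, using a hypothetical orders table and SQL*Plus syntax:

    -- Literal form: every distinct customer_id value hard-parses a new cursor
    --   SELECT SUM(amount) FROM orders WHERE customer_id = 42;

    -- Bind-variable form: one shared cursor, soft parses thereafter
    VARIABLE cust NUMBER
    EXEC :cust := 42
    SELECT SUM(amount) FROM orders WHERE customer_id = :cust;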
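For item 7, a sketch of the two statistics paths: in 12c a direct-path load into an empty table gathers basic statistics online, and DBMS_STATS remains the explicit tool. Schema and table names are placeholders:

    -- 12c gathers online statistics during a direct-path load into an empty table
    CREATE TABLE sales_fact_copy AS SELECT * FROM sales_fact;

    -- Explicit gathering, leaving histogram decisions to the defaults
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname    => USER,
        tabname    => 'SALES_FACT',
        method_opt => 'FOR ALL COLUMNS SIZE AUTO');
    END;
    /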
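And for item 8, enabling the in-memory column store on a table is a one-line change (the table name is a placeholder, and the INMEMORY_SIZE initialization parameter and In-Memory option must be in place first):

    -- Populate the table into the in-memory column store
    ALTER TABLE sales_fact INMEMORY MEMCOMPRESS FOR QUERY LOW;

    -- Queries need no changes; the optimizer uses the column store transparently
    SELECT product_id, SUM(amount) FROM sales_fact GROUP BY product_id;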
The best lesson, however, was the reinforcement of something I learned a while back: before reaching for any new feature (e.g. the in-memory column store), make sure that all other avenues for making the application optimal have been explored. A "costly/bad" approach may provide some relief, yet still end up hurting the performance goals. The way to get there is to spend the time identifying and addressing the root causes of the problem.



Columnar Databases II

10/8/2013


 
Continued from Columnar Databases I ...

So if a situation allows us to live with the limitations of columnar databases, how good are they? To find out for myself I set up an experiment comparing a popular row-based database (Oracle 11g) with compression turned on against a columnar database (Infobright) built on an open source database engine (MySQL). I set up the experiment mainly to explore compression of storage rather than query performance, as I did not have the resources for that elaborate an experiment.

For the OLTP data structures (the TPC-H(tm) benchmark), the compressed Oracle row store used about 10 GB of storage, including some indexes, which are required for such databases. The Infobright database, on the other hand, came in at 1.9 GB, about 1/5th of the storage. This is a significant saving given that the row store carried relatively few indexes; if more indexes had been added for performance reasons, the comparison would have favored the columnar store even more.

For the Star Schema Benchmark database, the data extracts were in the range of 6.7 GB of raw ASCII data when pulled from Infobright (by default it produces quoted strings, etc.) versus about 6 GB of raw ASCII data when pulled from the Oracle tables using a custom pipe delimiter. When loaded into Infobright, the data compressed to about 800 MB, again using no indexes. When loaded into the Oracle database with the same compression scheme as before, the data used about 6.5 GB, keeping in mind that we had three large indexes on lineorder and smaller indexes on the smaller tables as well. So while Oracle provides compression, the columnar database (Infobright) delivered an order-of-magnitude compression compared to the raw text data and to the row store's (Oracle's) equivalent database with basic compression. Due to lack of appropriate storage (an Exadata machine) I could not test the more aggressive compression schemes available for the Oracle row store.
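For reference, the setup was along these lines. The DDL and load commands below are illustrative reconstructions, not the originals; the lineorder table comes from the Star Schema Benchmark and the columns are abbreviated:

    -- Oracle: row store with basic table compression (illustrative DDL)
    CREATE TABLE lineorder (
      lo_orderkey   NUMBER,
      lo_quantity   NUMBER,
      lo_revenue    NUMBER
    ) COMPRESS BASIC;

    -- Infobright/MySQL: load the pipe-delimited extract into the columnar store
    LOAD DATA INFILE '/tmp/lineorder.dat'
    INTO TABLE lineorder
    FIELDS TERMINATED BY '|';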

Query timings were also better for the Infobright database: extracting the large fact table to a flat file took about 12 minutes, while the same extraction from Oracle took about 42 minutes. This highlights, at the least, the benefit of the smaller storage footprint, since the Oracle queries did not use any index either; these queries read all of the data from the tables in the join.
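The extractions were done roughly as follows; the file paths, delimiter and column list are placeholders:

    -- Infobright/MySQL: write the fact table out as a delimited flat file
    SELECT * INTO OUTFILE '/tmp/lineorder_ib.dat'
    FIELDS TERMINATED BY '|'
    FROM lineorder;

    -- Oracle (SQL*Plus): spool the same data to a flat file
    SET HEADING OFF FEEDBACK OFF PAGESIZE 0 TRIMSPOOL ON
    SPOOL /tmp/lineorder_ora.dat
    SELECT lo_orderkey || '|' || lo_quantity || '|' || lo_revenue FROM lineorder;
    SPOOL OFF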
 
The star schema shows a higher compression ratio for the columnar database, even though it uses mainly numeric data types in the large fact table.

References:
 
Abadi, D., Boncz, P., Harizopoulos, S., "Column-oriented Database Systems", in VLDB '09, August 24-28, 2009, Lyon, France.
Abadi, D. J., Madden, S. R., and Ferreira, M., "Integrating Compression and Execution in Column-oriented Database Systems", in Proc. SIGMOD, 2006.
Hodak, W., Jernigan, K., "Advanced Compression with Oracle Database 11g Release 2", an Oracle white paper, Oracle Corporation, September 2009.
Oracle, "Oracle 11g SQL Reference Guide", from otn.oracle.com.
Oracle, "Oracle 11g Utilities Guide", from otn.oracle.com.
Infobright.org, "Direct Data Load Guide", available from infobright.org.

Columnar Databases - Part I

10/6/2013


 
There is a need to process more and more data, and more data is being captured electronically from various collection points across commercial, non-profit, government and research communities. This phenomenon is termed Big Data in industry parlance. Making sense of the data being gathered requires a large amount of processing power. The data may be available in granular form or as documents, and sometimes the two may be correlated. Over time, the nature of the data being gathered has changed. Traditionally most data was transactional in nature, requiring CRUD (create, read, update, delete) operations. Now a larger amount of data is being created that is usually not updated and may only be deleted when it is no longer needed, typically after a much longer period than in the transactional sense. OLTP databases provided the ability to perform CRUD operations with ACID (atomic, consistent, isolated and durable) properties on granular data, and were later enhanced to store various types of documents (text, pictures, etc.), again with OLTP-style transactions in mind. These databases typically use a normalized data model for storage. But given the need to provide ACID guarantees and to handle the data volumes required for analytical needs, the data could no longer be contained in those models. Therefore data warehouses were built using the same type of databases but with different data models (typically dimensional, though not always). Data warehouses allowed the data to be separated from the OLTP systems, but they still grew fairly large in volume, and they typically serve far more reads than updates or writes.

While working with large volumes of data, I noticed that a large number of columns in a table often have low cardinality even when the table itself is fairly large. This led me to believe that the size of the data as stored on disk could be reduced with compression techniques. Since disk access is usually the slowest part of accessing a database, a smaller data footprint should lead to faster retrieval from the slower medium; however, there is an associated CPU cost incurred in compressing and decompressing the data. Since data warehouses carry the largest amounts of data, there are presumably trade-off scenarios in using one technique or the other. Even though normalization theory is about reducing the redundancy of duplicate data, and therefore provides efficient storage, there have to be other techniques that can be combined with it to reduce overall query time. One of the recent technologies that focuses on this aspect is the columnar-storage database.
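As a rough way of spotting compression-friendly columns, one can compare column cardinality with the row count; the table and column names below are placeholders:

    -- A low ratio of distinct values to total rows suggests the column will compress well
    SELECT COUNT(*)                     AS total_rows,
           COUNT(DISTINCT order_status) AS distinct_statuses,
           COUNT(DISTINCT region_code)  AS distinct_regions
    FROM   lineorder;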

[Abadi, Boncz, Harizopoulos, 2009] provide a brief, introductory tutorial on columnar databases. They describe them as follows:
 "Column-stores, in a nutshell, store each database table column separately, with attribute values belonging to the same column stored contiguously, compressed, and densely packed, as opposed to traditional database systems that store entire records (rows) one after the other."

They trace the history of column stores back to the 1970s, when the use of transposed files was explored. In the 1980s the benefits of the decomposition storage model (DSM) over row-based storage were explored. It is only in the 2000s that these data stores finally took off.

Because the data values for each column are stored contiguously on disk pages, the data lends itself to better compression schemes that can be lightweight in CPU utilization yet still provide heavy compression.

However, these databases are challenged in their ability to handle updates, and also in the tuple construction required by applications that access data through ODBC/JDBC-style interfaces. Tuple construction is needed to present the data in the row format those applications expect.

Continued at Columnar Databases II
