Tyler Muth’s Blog

Technology with a focus on Oracle, Application Express and Linux

Archive for the ‘APEX’ Category

Logger Project Moved Temporarily

Posted by Tyler Muth on November 9, 2011

The site samplecode.oracle.com was decommissioned recently. I was hosting a number of projects there, including “Logger”, my PL/SQL instrumentation package. Until I find a new home, here’s a temporary link to the latest release (1.4), or if you just want to view the readme, use this link. I’ll update this post when I decide on a new home.

This project is still very much alive, and I will continue to update it based on your suggestions. I know there are a number of them in my “queue”; I just haven’t had time. My plan is to find a public place to host it, such as GitHub, where everyone can contribute.

Posted in APEX, Oracle, PLSQL | 26 Comments »

Logger 1.4.0 Released

Posted by Tyler Muth on August 11, 2011

This is just a minor update to logger (project page here) to fix a bug in detecting DB Version 11.2 on RAC.

One important point to note is that in APEX 4.0 (or possibly 4.1), the APEX team removed the “create any context” priv from schemas created when provisioning a workspace. Logger needs this priv, so you won’t be able to simply install logger through SQL Workshop without first granting it to the schema that owns logger. The biggest impact will be on hosted instances such as apex.oracle.com, as you can’t simply ask someone to grant this priv to you (and please don’t try). I’ll look into whether this issue is easy to work around, and what the implications are, in the next few weeks.
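
If you manage your own instance, the fix is a one-line grant from a privileged account. A minimal sketch, where logger_owner is just a placeholder for whichever schema owns logger:

-- as a DBA; replace logger_owner with the schema that owns logger
grant create any context to logger_owner;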

Posted in APEX, Oracle, PLSQL | 3 Comments »

APEX is 57.4x Faster on Exadata

Posted by Tyler Muth on April 12, 2011

…is just about as useful (and accurate) as saying APEX is just as fast on my laptop as it is on Exadata. However, it makes for a great title. Now that I hopefully have your attention, let’s talk about the reasons for this post. As many of you know, APEX used to be my primary specialty, but I’ve changed roles a bit over the last year at Oracle and my primary focus is now Exadata. I’ve received an increasing number of questions about APEX on Exadata lately, so I thought I’d put together some thoughts on them. Additionally, I’ve had a lot of Exadata questions recently that led me to the conclusion that it’s not as well understood as I thought, at least within the groups of people I regularly interact with. Contrary to popular belief, it’s not a mythological beast powered by fairy dust and unicorn blood. It’s a very well engineered solution that solves a number of exceptionally challenging problems. In my opinion, the more you understand about it, the more you’ll appreciate it.

Does APEX Work on Exadata?

Yep, Exadata is just Oracle. Let me say that again: it’s just Oracle. Exadata runs an 11.2 database on Linux x64, and it’s the exact same binary install you get if you download the “generic” Linux x64 11.2 binaries from OTN. So, if your code / app runs on 11.2, it runs on Exadata. There are a few exceptions to this rule for apps that don’t play well with ASM. So, more accurately, if it runs on 11.2 on Linux x64 with ASM and RAC (optional, but desirable), it runs on Exadata. The APEX Dev Team (my old friends and colleagues) did absolutely nothing to port their code to Exadata. I’ve run a number of benchmarks with customers’ data and queries and have yet to make a single change to their queries or structures to make them work on Exadata. The only thing I’ve done is to set indexes invisible in Data Warehouse benchmarks, but that was more for fun… more on that later.

What Problems Does Exadata Solve?

The most obvious problem it solves is the I/O bottleneck. While I’m not in the same league as many of the well-known performance architects in the Oracle community, I spend a fair amount of time talking to customers about performance problems. In my estimation, the vast majority of the performance problems I encounter come down to I/O. Most DBAs (myself included) don’t know nearly enough about storage. Compounding that problem is the fact that there are often communication “issues” between storage groups and DBAs.

Let’s talk about how we get to data for a second, using a Tom Kyte-style example. Take a phone book and assume it has an index (paper style) in the back that tells you which names are on which pages. If you wanted to count all of the names in the phone book, it would be much quicker to scan through all of the names from beginning to end than to go to the index to find each and every name. This is an example of why full table and partition scans are often faster for data warehouse queries than index-based access paths. Flip that around and ask me for the phone number of the “Muths”, and it would be much faster to look at the index, find the page the “Mu”s start on, and go to that page. Think of this as an OLTP-style query.
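
In SQL terms, the two halves of that analogy look something like this (phone_book is a hypothetical table with an index on last_name):

-- warehouse style: counting every name reads the whole "book", so a full scan wins
select count(*) from phone_book;

-- OLTP style: one subscriber's number, so the index points straight to the right page
select phone_number from phone_book where last_name = 'Muth';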

So, I/O throughput is particularly important in DSS and Data Warehousing environments because you’re scanning through whole tables and partitions. However, the vast majority of the customers I talk to are running a SAN attached via 2 or 4 Gbps (gigabits per second) Fibre Channel. Those connections provide 200 or 400 MB/s (megabytes per second) respectively. If you want to run a full scan on a 200 GB table and you’re connected via 4 Gb FC, some simple math will tell you it will never run faster than 512 seconds, or 8.5 minutes. Now, compare that to the disk-only 25 GB/s (gigabytes per second) scan rate of an X2-2 full rack with High Performance disks, and we can now scan that table in 8 seconds. Our scan rate jumps to 75 GB/s with Flash Cache, but I’ll stick to disk-only numbers for scan operations. It’s easy to see how a query in this example could run 64x faster on Exadata, which supports my claim in the title of this post. The results I’ve obtained in POVs I’ve personally worked on have been incredible. On several occasions I’ve had to prove to customers that the results were real, especially in the case of I/O-intensive operations.
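
Here’s a quick sanity check of the scan-time arithmetic above, treating 1 GB as 1024 MB:

select round((200 * 1024) / 400)         as fc_seconds       -- 4 Gb FC = ~400 MB/s -> 512 seconds
     , round((200 * 1024) / (25 * 1024)) as exadata_seconds  -- 25 GB/s scan rate   -> 8 seconds
  from dual;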

You don’t need to have a 10 TB database to have I/O problems, either. The last POV I ran was on a 500 GB data set (including indexes). The database is a shared resource across many organizations and applications, and as they’ve built more applications on this database, their indexing strategy hasn’t exactly kept pace. Their top wait events from Statspack are mostly I/O related. Additionally, they copy this database nightly to a mirror server that they use for ad-hoc reporting, which, as you might expect, has even more I/O issues. It’s hard to index for ad-hoc…

What does this have to do with APEX? I’ve seen (and written) plenty of APEX apps that are front-ends to a data warehouse. If you are working with large data sets, doing a lot of aggregation, or allowing a lot of ad-hoc queries, traditional index strategies are often difficult to develop and of marginal value. In many cases like these, the best strategy for performance is to improve the I/O of your system.

So We Don’t Need Indexes Anymore?

I’ve heard this concept from far too many people not to address it. For Data Warehouse / DSS style queries, it is often the case with Exadata that the optimizer will choose a full table or partition scan access path, and consequently not use indexes. In several Data Warehouse POVs I’ve simply set all indexes invisible without negatively impacting performance, since the optimizer wasn’t using them anyway. You still need indexes for primary keys, uniqueness, etc., but I think it’s fair to say that implementing a Data Warehouse on Exadata will allow you to eliminate many (dare I say “most”) of your indexes. This allows you to reclaim a lot of disk space and certainly makes data loads faster.
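
Invisible indexes make this a low-risk experiment since the index is still maintained and can be flipped back instantly. The index name here is hypothetical:

alter index sales_fact_n1 invisible;

-- and if the optimizer turns out to need it after all:
alter index sales_fact_n1 visible;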

However, this concept has nothing to do with OLTP style queries! If you want to retrieve a row based on its primary key, which do you think is faster: 1) traversing a few blocks in a b-tree index to find the rowid, then plucking the single block that contains your row from disk (or RAM), or 2) scanning the whole multi-gigabyte table? We still need / want indexes for OLTP. Dropping all of the indexes on a busy Exadata-hosted OLTP application would be disastrous. This also applies to the APEX builder itself: those tables are heavily indexed because, other than some of the administrative reports, all of the queries are going after a few specific rows to render your page.

But I Thought Storage Indexes Replaced Traditional Indexes

Uh, no. Storage indexes are often misunderstood. They are unique to Exadata and are implemented automatically by the storage cells. I actually like to think of them as anti-indexes. They store the high and low values of number and date columns for each storage unit (~1 MB). As we scan through a table or partition, if the query has a predicate that filters on one of these columns, we can skip all of the blocks in a storage unit whenever the value we are looking for falls outside of those high and low values. The key word there is “scanning”. They are used in data warehouse style queries, but not OLTP style queries. Clearly, they are no replacement for traditional indexes in OLTP applications.
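
If you want to see whether storage indexes helped a query you just ran, there’s a session statistic for it. A minimal check, assuming I’m remembering the statistic name correctly (it only accumulates on Exadata):

select n.name, round(s.value / 1024 / 1024) as mb_saved
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name = 'cell physical IO bytes saved by storage index';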

Many of My APEX Apps are OLTP, Will They Run Faster on Exadata?

Potentially. Even though those apps shouldn’t be doing a lot of table scans, they will often still incur some physical I/O in the form of single-block reads. If the size of your “hot” data and indexes for all of your applications is larger than the RAM you have allocated to your SGA, you’re going to incur physical I/O. The time difference between reading a block from the buffer cache and reading it from disk is dramatic. Buffer cache reads should be in the microsecond range, whereas reads from disk can take tens of milliseconds. Your single-block read times essentially fall off a cliff when you go to disk.
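
You can see that cliff on your own system by looking at the distribution of single-block read times. The buckets in v$event_histogram are in milliseconds:

select wait_time_milli, wait_count
  from v$event_histogram
 where event = 'db file sequential read'
 order by wait_time_milli;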

Exadata added Flash Cache in the second (V2) and third (X2-2, X2-8) versions. It’s essentially solid-state storage (as cards, not disks) in the storage cells, configured as a write-through (read) cache. So, if you ask for a block and we don’t find it in the SGA, we go to the storage cells. If it’s in Flash, we can return it from there. If not, we grab the block off disk, then optionally copy it to Flash so the next time we need to read that block from “disk” we can return it from Flash instead. The single-block read times from Flash are often under a millisecond, providing a nice step down in access time between RAM and disk. A full rack has 5.3 TB of Flash Cache, so we can cache A LOT of blocks there. I’ve skipped a lot of details here, but I can tell you from the results of several OLTP POVs that I’ve worked on that it plays a very significant role in OLTP performance. There’s a nice white paper on Flash Cache here (PDF).
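
As with storage indexes, there’s a statistic that shows how often Flash Cache saved a trip to disk; again, I’m going from memory on the name, and it’s only meaningful on Exadata:

select name, value
  from v$sysstat
 where name = 'cell flash cache read hits';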

APEX is Only for Departmental Applications, Why Would I Run it on Exadata?

Anybody connected to the APEX community knows that this has a lot more to do with product positioning than with actual capabilities. There are plenty of enterprise-class APEX apps out there. One great example is the new Oracle Store. APEX is a great front-end for database-centric applications. Exadata is a great platform for databases. Enough said.

Posted in APEX, Exadata | Tagged: , , | 5 Comments »

APEX 4.0 Released

Posted by Tyler Muth on June 23, 2010

Just a quick note to let everyone know that APEX 4.0 is now available for download here.  I’m really excited for this release, especially plug-ins, dynamic actions and Web Sheets.

Posted in APEX, Oracle | Leave a Comment »

APEX Packager Utility

Posted by Tyler Muth on February 2, 2010

I’ve been working on a pretty substantial APEX application with Jason Straub and Sharon Kennedy for the last x months (where x > estimated time). It will likely be an APEX Packaged Application, which consists of the APEX application, DDL scripts for all schema objects, and install scripts for ~20 image / JavaScript / CSS files. The process of creating a build is just painful. It involves way too much pointing and clicking and way too much time. I’m not faulting the APEX team for this, as very few people create packaged applications and their time is much better spent on other features (and there are a ton of cool features in 4.0). So, I’ve always had it in the back of my mind that I should have learned Perl, as it’s a very versatile language that really shines where my primary skill-set of SQL, PL/SQL, and JavaScript falls flat. Perl was designed to work with files, and this problem was all about files. There’s no better way to learn a new skill than to have a real project to apply it to…

So, I set aside some evenings and a few hours on the weekends here and there and wrote the APEX Packager. Here’s the description from the project page, in case you don’t have an OTN account:

APEX Packager is a utility written in Perl to create APEX Packaged Applications (examples, documentation). Its goal is to automate the process of adding supporting object scripts such as PL/SQL packages and table DDL, and the process of adding images / JavaScript / CSS to an application. The manual process for each of these steps requires the developer to upload each script and image individually. In the case of images, an additional step is required to generate the hex-encoded version of the image as a script, then upload that script as a supporting object. Once properly configured, APEX Packager will automatically perform both of these steps offline without uploading any files. As a test case, APEX Packager took a moderately sized APEX application, 25 DDL scripts, and 1,000 images and packaged them into a single application in 7 seconds.

APEX Packager WILL NOT generate your DDL scripts for you or reverse engineer a schema. This process is better suited for tools like SQL Developer or Data Pump Export.
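
If you need to produce those DDL scripts yourself, dbms_metadata is another straightforward option alongside those tools. The object and schema names here are just illustrative:

-- extract the DDL for one table as a CLOB
select dbms_metadata.get_ddl('TABLE', 'EMP', 'SCOTT') from dual;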

This utility consists of 2 Perl files: apex-packager.pl and config.pl. apex-packager.pl takes one argument: the config file. This allows you to create different config files for different applications / scenarios and simply pass in the appropriate one when you run the script.

It’s definitely not for everyone. I suspect the primary audience will be the APEX Team and any of the APEX experts that are producing example applications or applications delivered to customers. If you’re new to APEX, this is not for you.  There are a million other resources and utilities you should focus on first.

As a final thought, keep in mind this was my first adventure in Perl. Please don’t use the code as an example of best practices. If you know Perl, I’d love to get your feedback and suggestions.  Even better, please join the project and contribute! I can honestly say this will not be my last Perl utility.  It was such an easy language to learn and there are so many CPAN modules I’d love to explore…

Posted in APEX | 9 Comments »

Logger, A PL/SQL Logging and Debugging Utility

Posted by Tyler Muth on November 3, 2009

I’ve been gradually building out a logging utility for PL/SQL over the last year or so. It’s been a huge help on some complicated projects, especially in APEX applications with a lot of AJAX. I’m sure most people reading this have been stuck on a problem, created a logging table, and sprinkled a few inserts in their code to see what’s going on and when. It’s amazing how much faster you resolve these issues if the table is already there and the inserts are just a short procedure call. Anyone that’s ever used Firebug probably can’t imagine it without console.log() or console.dir(). Yes, you can use dbms_output.put_line or htp.p, but those fall over pretty quickly in a web environment with HTTP POSTs or, worse, a lot of AJAX POSTs / GETs. Additionally, if you’re a Tom Kyte groupie, you’ve probably heard him talk about the value of instrumentation more than once.

Now that you know the “why”, let’s talk about what Logger is. It’s just a couple of tables and a PL/SQL package that make all of this easier for you. I’ll give you a brief summary of features here, but for full details you should go to the project page at https://logger.samplecode.oracle.com. So, the super-short demo is:

logger@orcl> exec logger.log('hello world');

PL/SQL procedure successfully completed.

logger@orcl> select * from logger_logs_terse;

     ID LOGGER_LEVEL TIME_AGO             TEXT
------- ------------ -------------------- ---------------------
     48           16 35 seconds ago       hello world

Features Include:

  • Easily Enable / Disable all logging
  • Minimal Overhead. There’s even a NO-OP version of the package that you can use in production if you’re really paranoid. It’s just a stub of the logger package that doesn’t write to or depend on any tables (or any other objects).
  • Automatically purges debug messages older than 7 days. This is completely configurable.
  • Ability to time blocks of code (see the sketch after this list)
  • Log sys_context(‘userenv’, …) variables
  • Log APEX Item Names and Values
  • Automatically captures key session information such as module, action, and client_identifier (including APEX session ID), plus timestamp and call stack.
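
For the timing feature, here’s a minimal sketch. I’m writing the time_start / time_stop procedure names from memory, so check the readme for the exact signatures:

begin
  logger.time_start('import_employees');
  -- ... the code you want to time ...
  logger.time_stop('import_employees'); -- logs the elapsed time
end;
/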

I’d like to thank all of the community members that tested and contributed ideas for logger including:

I would love to get more feedback from the community as well.  You can either comment here, or use one of the 3 discussion forums I created for logger listed on the project page just above the download link.

Posted in APEX, Oracle, PLSQL | Tagged: , , | 44 Comments »

APEX “Application Process” Bookmarklet

Posted by Tyler Muth on October 12, 2009

If you’re an APEX developer doing a lot of AJAX work, you probably spend a lot of time clicking between pages and the “Application Process” section of APEX. In APEX 3.1 and 3.2 it takes 2 clicks on links that are in very different parts of the page. Unless you do this a lot, you probably think I’m crazy for complaining about it, but if you are doing a lot of AJAX development, it’s enough to drive you insane. I even kept a separate browser (Safari) open just for App Processes.

Solution: I wrote a super-simple bookmarklet that reduces this to one click. Since WordPress.com blocks my ability to post a link with JavaScript code in it, you’ll have to manually create a new bookmark on your web browser’s bookmark toolbar using the following for the URL:

javascript:location.href="f?p=4000:4207:"+$v('pInstance')

Now, once you’ve logged into the APEX builder, that bookmark will take you straight to the Application Process section using the same APEX session.

Posted in APEX, Application Express, Oracle | 4 Comments »

jApex – A jQuery Plugin for APEX

Posted by Tyler Muth on August 19, 2009

In a previous post I proclaimed that jQuery Selectors Will Change Your Life. While working on an AJAX-centric APEX project, I wanted to use jQuery selectors to post data from an APEX page to an application process, so I wrote a jQuery plugin. Over the course of this project the plugin has grown into something that I think the community will find useful. Essentially it serves the same purpose as Carl Backstrom’s htmldb_get() function, but adds the power of jQuery selectors. Carl was actually the one who introduced me to jQuery, and I’m sure he would have written something similar, only much better and in half the time.

I think the best way to introduce it is with a few quick examples. I have a full set of working examples on apex.oracle.com here, but let’s start with some simple ones to give you an idea of how it works.

APEX Items

The following code sends every item on an APEX page to the application process “SOME_APP_PROCESS”, then pops up an alert with the results from the application process. Note that we did not have to specify each item individually; the jQuery selector (documented here) grabs them all.

var options = {
  appProcess: 'SOME_APP_PROCESS',
  pageItems: $(':input'),
  success: function(data){
    alert(data);
  }
};

$.jApex.ajax(options);

If we want to send all items inside a div with ID “employeeDiv” and log the results to the Firebug console:

var options = {
  appProcess: 'SOME_APP_PROCESS',
  pageItems: $('#employeeDiv :input'),
  success: function(data){
    console.log(data);
  }
};

$.jApex.ajax(options);

Want to send the items P1_NAME, P1_EMAIL, P1_PHONE?

var options = {
  appProcess: 'SOME_APP_PROCESS',
  pageItems: $('#P1_NAME, #P1_EMAIL, #P1_PHONE'),
  success: function(data){
    console.log(data);
  }
};

$.jApex.ajax(options);

Other Features

  • You can also send data to the x01-x10 parameters. There are several examples of this in my sample app.
  • I had already written code to support sending arrays to the f01-f50 parameters used by Tabular Forms. I really don’t like Tabular Forms and rarely, if ever, use them. However, I figured someone would want / request this functionality, so I enhanced it a bit to better support Tabular Forms. Keep in mind this functionality has not been tested much, so use it at your own risk.

Tips

  • $ is the alias for jQuery
  • jQuery plugins almost always take 1 parameter, which is an object. Each property of the object is analogous to a traditional JavaScript (or, for that matter, PL/SQL) parameter.
  • Properties can be objects, arrays, scalars, or even functions. Most people are used to scalars, so passing a function in as a parameter / property can be really confusing at first.

Download

You can download the jQuery plugin here and the example APEX application here.

Update: I hosted the files in a more permanent location.

Posted in APEX, JavaScript / AJAX | Tagged: , , | 27 Comments »

Octagonal Peg in a Hexagonal Hole (APEX Backward Compatibility)

Posted by Tyler Muth on June 4, 2009

I was going to title this post “Hacking away at an APEX 3.2 app until you can wedge it into a 3.1 instance”, but I shortened it to the current title. This week I went to import an application I built in APEX 3.2 for an internal customer, only to find that their APEX instance was still on 3.1. They support a ton of mission-critical apps on that instance, so upgrading wasn’t an option, no matter how hard I begged. So, I began the tedious task of editing the export file until I was able to import it.

Just to be clear, this process is NOT SUPPORTED in any way, shape or form by Oracle or myself, so proceed with caution. DON’T EVEN THINK of calling support with issues introduced by this technique. If you start down this path, you are on your own! I’m posting this to try and help some of the more advanced developers in this community and I do not want to introduce a support nightmare for Oracle Support or the APEX Dev Team.

Import Hacking the Slow Way

I started by trying the import in SQL*Plus, as it’s much faster and more informative than the web interface for repetitive imports. The key things that change from version to version of APEX are the procedures of WWV_FLOW_API and their parameters. Once I edited the SECURITY_GROUP_ID, SET_VERSION, and FLOW_ID (more on these later), I began the Import > Find Error > Edit File > Repeat sequence. One thing that really helps here is a text editor that supports split-screen, synchronized scrolling, such as Notepad++ or UltraEdit. Notepad++ screenshot here. After a few minutes I came to the conclusion that there had to be a better way.

Import Hacking the Faster Way

I started by querying the ALL_ARGUMENTS view and saving the results in my own table:

create table apex_api_parameters as
	select '3.2' version, object_name,argument_name
	  from all_arguments
	 where package_name='WWV_FLOW_API'
	   and owner = 'APEX_030200'
/

I repeated this step in each database to get the WWV_FLOW_API procedures and parameters for APEX 3.2 and 3.1.  Now I can use SQL set operations to compare them:

select object_name
  from apex_api_parameters
 where version = '3.2'
minus
select object_name
  from apex_api_parameters
 where version = '3.1';

with common_objects as (
    select object_name
      from apex_api_parameters
     where version = '3.2'
    intersect
    select object_name
      from apex_api_parameters
     where version = '3.1')
select p.object_name, p.argument_name
  from apex_api_parameters p, common_objects c
 where p.version = '3.2'
   and p.object_name = c.object_name
minus
select p.object_name, p.argument_name
  from apex_api_parameters p, common_objects c
 where p.version = '3.1'
   and p.object_name = c.object_name;

Based on the results of the first query, I know which procedures in WWV_FLOW_API exist in 3.2 but not in 3.1. These almost always equate to new features. If I were comparing 3.1 to 3.0, I would see the CREATE_WORKSHEET procedure, which equates to interactive reports. The second query shows the parameters of procedures common to both versions that exist in 3.2 but not in 3.1.

Step by Step Hacking

  1. Make a backup copy of the app you plan to edit.
  2. Using the APEX Application Builder in the newer (source) instance, edit the application to remove any major features that you know don’t exist in the older (target) instance, such as interactive reports.
  3. Export your application from the source instance.
  4. In the target instance, create a new, 1 page dummy application and export it.
  5. Use the SECURITY_GROUP_ID, SET_VERSION, and FLOW_ID from the dummy application in your actual application.
  6. Run my Parameter Diff application (below) and keep it open to use in the next two steps.
  7. Continue editing the application export file to remove any procedures that are not supported in the target instance.
  8. Now find all of the unsupported parameters for existing procedures and delete them.
  9. Test your import.  Repeat steps 7-9 if needed.

To make this process easier, I created an APEX app with all of the API parameters from 3.2, 3.1, 3.0, and 2.2. You can see it in action here, or download it here. Keep in mind the supporting objects run 8,000+ inserts, so it may take a minute or two to install. Happy hacking!!!

Posted in APEX, Application Express, Oracle | Tagged: , , | 9 Comments »

Working on Local JavaScript Files

Posted by Tyler Muth on October 22, 2008

JavaScript can be a nightmare to work with. Typically, the functions you’re defining are in a separate .js file that’s included in the HEAD of your HTML. For convenience, I find it easiest to work on a local copy of this file instead of repeating the change, FTP to server, refresh page cycle. This is especially true in an Application Express (APEX) environment, where you would have to upload a new version to the APEX instance via the web interface between each change. The problem is that most browsers (I tested Firefox, IE 7, Chrome, and Safari) block access to local files called by remote pages. This is obviously a good thing for security purposes, but it makes it REALLY hard to work with JavaScript.

The solution?  Firefox allows you to add exceptions to this rule on a site-by-site basis.

  1. First, shut down all instances of Firefox (don’t skip this step).
  2. Then, edit or create a user.js file in your profile using this guide. It varies by operating system, but on Vista 64-bit my profile was in “C:\Users\tmuth\AppData\Roaming\Mozilla\Firefox\Profiles“.
  3. Based on this article, add the following lines to your user.js file to allow applications on apex.oracle.com to include local JavaScript files:
    user_pref("capability.policy.localfilelinks.checkloaduri.enabled", "allAccess");
    user_pref("capability.policy.localfilelinks.sites", "http://apex.oracle.com");
    user_pref("capability.policy.policynames", "localfilelinks");
  4. Start Firefox.
  5. Add a link to a local JavaScript file in your application. Note that there are Path Syntax examples in the “Links to local pages do not work” article.

Now you can edit the file on your local machine, then refresh the web application to reflect those changes.

Obviously, once everything is working you’ll need to upload the JavaScript file to the application server (or APEX in this case) and change the reference in the HTML HEAD to point to the version running on the server, not your local machine.

Another solution for smaller functions is to use the Firebug plugin for Firefox, but it just doesn’t work well for really big functions.

Posted in APEX, JavaScript / AJAX | 5 Comments »