Tyler Muth’s Blog

Technology with a focus on Oracle, Application Express and Linux

Posts Tagged ‘APEX’

APEX is 57.4x Faster on Exadata

Posted by Tyler Muth on April 12, 2011

…is just about as useful (and accurate) as saying APEX is just as fast on my laptop as it is on Exadata. However, it makes for a great title. Now that I hopefully have your attention, let’s talk about the reasons for this post. As many of you know, APEX used to be my primary specialty, but I’ve changed roles a bit over the last year at Oracle and my primary focus is now Exadata. I’ve received an increasing number of questions about APEX on Exadata lately, so I thought I’d put together some thoughts on them. Additionally, I’ve had a lot of Exadata questions recently that led me to the conclusion that it’s not as well understood as I thought, at least within the groups of people I regularly interact with. Contrary to popular belief, it’s not a mythological beast powered by fairy dust and unicorn blood. It’s a very well engineered solution that solves a number of exceptionally challenging problems. In my opinion, the more you understand about it, the more you’ll appreciate it.

Does APEX Work on Exadata?

Yep, Exadata is just Oracle. Let me say that again: it’s just Oracle. Exadata runs an 11.2 Database on Linux x64. It’s the exact same binary install as the “generic” Linux x64 11.2 download from OTN. So, if your code / app runs on 11.2, it runs on Exadata. There are a few exceptions to this rule for apps that don’t play well with ASM. So, more accurately, if it runs on 11.2 on Linux x64 with ASM and RAC (optional, but desirable), it runs on Exadata. The APEX Dev Team (my old friends and colleagues) did absolutely nothing to port their code to Exadata. I’ve run a number of benchmarks with customers’ data and queries and have yet to make a single change to their queries or structures to make them work on Exadata. The only thing I’ve done is to set indexes invisible in Data Warehouse benchmarks, but that was more for fun…more on that later.

What Problems Does Exadata Solve?

The most obvious problem it solves is the I/O bottleneck. While I’m not in the same league as many of the well known performance architects in the Oracle community, I spend a fair amount of time talking to customers about performance problems. In my estimation, the vast majority of the performance problems I encounter come down to I/O. Most DBAs (myself included) don’t know nearly enough about storage. Compounding that problem is the fact that there are often communication “issues” between storage groups and the DBAs.

Let’s talk about how we get to data for a second. I’ll try and use a Tom Kyte example. Take a phone book and assume it has an index (paper style) in the back that tells you which names are on which pages. If you wanted to count all of the names in the phone book, it would be much quicker to just scan through all of the names from beginning to end than it would be to go to the index to find each and every name. This is an example of why full table and partition scans are often faster for data warehouse queries than index-based access paths. Flip that around and ask for the phone number of the “Muths”, and it would be much faster to look at the index, find out what page the “Mu”s start on, and go to that page. Think of this as an OLTP style query.
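In SQL terms, the two access patterns from the analogy look something like this (table and column names are hypothetical):

```sql
-- "Count every name": reading the whole book front to back.
-- A full table scan beats visiting the index for every single entry.
select count(*)
  from phone_book;

-- "Find the Muths": jump straight to the right page.
-- An index on last_name makes this a quick index access plus a single-block read.
select phone_number
  from phone_book
 where last_name = 'Muth';
```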

So, I/O throughput is particularly important in DSS and Data Warehousing environments because you’re scanning through whole tables and partitions. However, the vast majority of the customers I talk to are running a SAN attached via 2 or 4 Gbps (gigabits per second) Fibre Channel. That connection provides 200 or 400 MB/s (megabytes per second) respectively. If you want to run a full scan on a 200 GB table and you’re connected via 4 Gb FC, some simple math will tell you it will never run faster than 512 seconds, or 8.5 minutes. Now, compare that to the disk-only 25 GB/s (gigabytes per second) scan rate of an X2-2 full rack with High Performance disks and we can now scan that table in 8 seconds. Our scan rate jumps to 75 GB/s with Flash Cache, but I’ll stick to disk-only numbers for scan operations. It’s easy to see how a query in this example could run 64x faster on Exadata, which supports my claim in the title of this post. The results I’ve obtained in POVs I’ve personally worked on have been incredible. On several occasions I’ve had to prove to customers that the results were real, especially in the case of I/O intensive operations.
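That back-of-the-envelope math can be sketched as a tiny model (a deliberate simplification that ignores compression, Smart Scan offload, and any caching; it’s just size divided by the raw pipe):

```javascript
// Lower bound on full-scan time: seconds = table size / sustained scan rate.
function scanSeconds(tableGB, rateMBps) {
  return (tableGB * 1024) / rateMBps;
}

var fc4Gb   = scanSeconds(200, 400);       // 4 Gb FC is ~400 MB/s
var exadata = scanSeconds(200, 25 * 1024); // X2-2 full rack, disk-only, ~25 GB/s

console.log(fc4Gb, exadata, fc4Gb / exadata); // 512 8 64
```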

You don’t need to have a 10 TB database to have I/O problems either. The last POV I ran was on a 500 GB data set (including indexes). The database is a shared resource across many organizations and applications. As they’ve built more applications on this database, their indexing strategy hasn’t exactly kept pace. Their top wait events from statspack are mostly I/O related. Additionally, they copy this database nightly to a mirror server that they use for ad-hoc reporting, which, as you might expect, has even more I/O issues. It’s hard to index for ad-hoc…

What does this have to do with APEX? I’ve seen (and written) plenty of APEX apps that are front-ends to a data warehouse. If you are working with large data sets, doing a lot of aggregation, or allowing a lot of ad-hoc queries, traditional index strategies are often difficult to develop and of marginal value. In many cases like these, the best strategy for performance is to improve the I/O of your system.

So We Don’t Need Indexes Anymore?

I’ve heard this concept from far too many people not to address it. For Data Warehouse / DSS style queries, it is often the case with Exadata that the optimizer will choose a full table or partition scan access path, and consequently not use indexes. In several Data Warehouse POVs I’ve simply set all indexes invisible without negatively impacting performance, since the optimizer wasn’t using them anyway. You still need indexes for primary keys, uniqueness, etc., but I think it’s fair to say that implementing a Data Warehouse on Exadata will allow you to eliminate many (dare I say “most”) of your indexes. This allows you to reclaim a lot of disk space and certainly makes data loads faster.
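As a sketch, here’s what setting an index invisible looks like in 11g (the index name is hypothetical). The index is still maintained by DML, the optimizer just stops considering it, so reverting is instant:

```sql
-- Hide the index from the optimizer; DML still maintains it.
alter index sales_fact_cust_ix invisible;

-- Let just this session consider invisible indexes again, handy for comparison runs.
alter session set optimizer_use_invisible_indexes = true;

-- Revert if performance regresses.
alter index sales_fact_cust_ix visible;
```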

However, this concept has nothing to do with OLTP style queries! If you want to retrieve a row based on its primary key, which do you think is faster: 1) traversing a few blocks in a b-tree index to find the rowid, then plucking the single block that contains your row from disk (or RAM), or 2) scanning the whole multi-gigabyte table? We still need / want indexes for OLTP. Dropping all of the indexes on a busy Exadata-hosted OLTP application would be disastrous. This also applies to the APEX builder itself. Those tables are heavily indexed since, aside from some of the administrative reports, all of the queries go after a few specific rows to render your page.

But I Thought Storage Indexes Replaced Traditional Indexes

Uh, no. Storage indexes are often misunderstood. They are unique to Exadata and implemented automatically by the storage cells. I actually like to think of them as anti-indexes. They store the high and low values of number and date columns for a storage unit (~1 MB). As we are scanning through the table or partition, if the query has a predicate that filters on one of these columns, we know we can skip all of the blocks in the storage unit if the value we are looking for falls outside of those high and low values. The key word there is “scanning”. They are used in data warehouse style queries, but not OLTP style queries. Clearly, they are no replacement for traditional indexes in OLTP applications.
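If you want to see whether storage indexes actually saved your session any I/O, there is a statistic for it (value is in bytes); a quick sketch:

```sql
-- How much physical I/O did storage indexes let this session skip?
select n.name, s.value
  from v$statname n, v$mystat s
 where s.statistic# = n.statistic#
   and n.name = 'cell physical IO bytes saved by storage index';
```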

Many of My APEX Apps are OLTP, Will They Run Faster on Exadata?

Potentially. Even though those apps shouldn’t be doing a lot of table scans, they will often still incur some physical I/O in the form of single-block reads. If the size of your “hot data and indexes” for all of your applications is larger than the RAM you have allocated to your SGA, you’re going to incur physical I/O. The time difference between reading a block from the buffer cache and reading it from disk is dramatic. Buffer cache reads should be in the microsecond range whereas reads from disk can take tens of milliseconds. Your single block read times essentially fall off a cliff when you go to disk.

Exadata added Flash Cache in the second (V2) and third (X2-2, X2-8) versions. It’s essentially solid state storage (as cards, not disks) in the storage cells that is configured as a write-through (read) cache. So, if you ask for a block and we don’t find it in the SGA, we go to the storage cells. If it’s in Flash, we can return it from there. If not, we grab the block off disk, then optionally copy it to Flash so the next time we need to read that block from “disk” we can return it from Flash instead. The single-block read times from flash are often under a millisecond. It provides a nice step down in access time between RAM and disk. A full rack has 5.3 TB of Flash Cache, so we can cache A LOT of blocks there. I’ve skipped a lot of details here, but I can tell you from the results of several OLTP POVs that I’ve worked on that it plays a very significant role in OLTP performance. There’s a nice white paper on Flash Cache here (PDF).

APEX is Only for Departmental Applications, Why Would I Run it on Exadata?

Anybody connected to the APEX community knows that this has a lot more to do with product positioning than it does actual capabilities. There are plenty of enterprise-class APEX apps out there. One great example is the new Oracle Store. APEX is a great front-end to database-centric applications. Exadata is a great platform for databases. Enough said.

Posted in APEX, Exadata | Tagged: , , | 6 Comments »

Logger, A PL/SQL Logging and Debugging Utility

Posted by Tyler Muth on November 3, 2009

I’ve been gradually building out a logging utility for PL/SQL over the last year or so. It’s been a huge help on some complicated projects, especially in APEX Applications with a lot of AJAX. I’m sure most people reading this have been stuck on a problem, created a logging table, and sprinkled a few inserts in their code to see what’s going on and when. It’s amazing how much faster you resolve these issues if the table is already there and the inserts are now just a short procedure call. Anyone that’s ever used Firebug probably can’t imagine it without console.log() or console.dir(). Yes, you can use dbms_output.put_line or htp.p, but those fall over pretty quick in a web environment with HTTP Posts or worse, a lot of AJAX Posts / Gets. Additionally, if you’re a Tom Kyte groupie, you’ve probably heard him talk about the value of instrumentation more than once.

Now that you know the “why”, let’s talk about what Logger is.  It’s just a couple of tables and a PL/SQL package that make all of this easier for you.  I’ll give you a brief summary of features here, but for full details you should go to the project page at https://logger.samplecode.oracle.com.  So, the super short demo is:

logger@orcl> exec logger.log('hello world');

PL/SQL procedure successfully completed.

logger@orcl> select * from logger_logs_terse;

     ID LOGGER_LEVEL TIME_AGO             TEXT
------- ------------ -------------------- ---------------------
     48           16 35 seconds ago       hello world

Features Include:

  • Easily Enable / Disable all logging
  • Minimal Overhead.  There’s even a NO-OP version of the package that you can use for production if you’re really paranoid. It’s just a stub of the logger package that doesn’t write to or depend on any tables (or any other objects).
  • Automatically purges debug messages older than 7 days.  This is completely configurable
  • Ability to time blocks of code
  • Log sys_context('userenv', ...) variables
  • Log APEX Item Names and Values
  • Automatically captures key session information such as module, action, client_identifier including APEX session ID, timestamp, call stack.
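For example, timing a block of code looks roughly like this (procedure names from the logger API; see the project page for the exact signatures):

```sql
begin
  logger.time_start('nightly_load');

  -- ...the code you want to time...

  logger.time_stop('nightly_load'); -- logs the elapsed time under that name
end;
/
```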

I’d like to thank all of the community members that tested and contributed ideas for logger including:

I would love to get more feedback from the community as well.  You can either comment here, or use one of the 3 discussion forums I created for logger listed on the project page just above the download link.


Posted in APEX, Oracle, PLSQL | Tagged: , , | 44 Comments »

jApex – A jQuery Plugin for APEX

Posted by Tyler Muth on August 19, 2009

In a previous post I proclaimed that jQuery Selectors Will Change Your Life.  While working on an AJAX centric APEX project, I wanted to use jQuery selectors to post data from an APEX page to an application process so I wrote a jQuery plugin. Over the course of this project the plugin has grown into something that I think the community would find useful.  Essentially it serves the same purpose as Carl Backstrom’s htmldb_get() function, but adds the power of jQuery selectors.  Carl was actually the one that introduced me to jQuery, and I’m sure he would have written something similar, only much better and in half the time.

I think the best way to introduce it is with a few quick examples.  I have a full set of working examples on apex.oracle.com here, but lets start with some simple examples to give you an idea of how it works.

APEX Items

The following code sends every item on an APEX page to the application process “SOME_APP_PROCESS”, then pops up an alert with the results from the application process. Note that we did not have to specify each item individually; the jQuery selector (documented here) grabs them all.

var options = {
 appProcess: 'SOME_APP_PROCESS',
 pageItems: $(':input'),
 success:
   function(data){
     alert(data);
 }
};

$.jApex.ajax(options);

If we want to send all items inside a div with ID “employeeDiv” and send the results to the Firebug console:

var options = {
 appProcess: 'SOME_APP_PROCESS',
 pageItems: $('#employeeDiv :input'),
 success:
   function(data){
     console.log(data);
 }
};

$.jApex.ajax(options);

Want to send the items P1_NAME, P1_EMAIL, P1_PHONE?

var options = {
 appProcess: 'SOME_APP_PROCESS',
 pageItems: $('#P1_NAME, #P1_EMAIL, #P1_PHONE'),
 success:
   function(data){
   console.log(data);
 }
};

$.jApex.ajax(options);

Other Features

  • You can also send data to the x01-x10 parameters.  There are several examples of this in my sample app.
  • I had already written code to support sending arrays to the f01-f50 parameters used by Tabular Forms.  I really don’t like Tabular Forms and rarely, if ever, use them.  However, I figured someone would want / request this functionality, so I enhanced it a bit to better support Tabular Forms.  Keep in mind this functionality has not been tested much, so use at your own risk.

Tips

  • $ is the alias for jQuery
  • jQuery plugins almost always have one parameter that is an object.  Each property of the object is analogous to a traditional JavaScript (or for that matter PL/SQL) parameter.
  • Parameters can be objects, arrays, scalars or even functions.  Most people are used to scalars, so passing a function in as a parameter / property can be really confusing at first.
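To illustrate that last tip, here is a generic sketch (nothing jApex-specific; all names are made up) of an options object where one property is a scalar and another is a function:

```javascript
// One options object instead of a long positional parameter list.
var options = {
  retries: 3,                 // scalar property
  onDone: function(msg) {     // function property, i.e. a callback
    return 'done: ' + msg;
  }
};

function run(opts) {
  // The caller's function is invoked just like any other value.
  return opts.onDone('after ' + opts.retries + ' tries');
}

console.log(run(options)); // done: after 3 tries
```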

Download

You can download the jQuery plugin here and the example APEX application here.

Update: I hosted the files in a more permanent location.

Posted in APEX, JavaScript / AJAX | Tagged: , , | 28 Comments »

Octagonal Peg in a Hexagonal Hole (APEX Backward Compatibility)

Posted by Tyler Muth on June 4, 2009

I was going to title this post “Hacking away at an APEX 3.2 app until you can wedge it into a 3.1 instance”, but I shortened it to the current title.  This week I went to import an application I built in APEX 3.2 for an internal customer, only to find that their APEX instance was still on 3.1.  They support a ton of mission critical apps on that instance so upgrading wasn’t an option, no matter how hard I begged.  So, I began the tedious task of editing the export file until I was able to import it.

Just to be clear, this process is NOT SUPPORTED in any way, shape or form by Oracle or myself, so proceed with caution. DON’T EVEN THINK of calling support with issues introduced by this technique. If you start down this path, you are on your own! I’m posting this to try and help some of the more advanced developers in this community and I do not want to introduce a support nightmare for Oracle Support or the APEX Dev Team.

Import Hacking the Slow Way

I started by trying the import in SQL*Plus as it’s much faster and more informative than the web interface for repetitive imports.  The key things that change from version to version of APEX are the procedures of WWV_FLOW_API and their parameters.  Once I edited the SECURITY_GROUP_ID, SET_VERSION, and FLOW_ID (more on these later), I began the Import > Find Error > Edit File > Repeat sequence. One thing that really helps here is a text editor that supports split-screen, synchronized scrolling, such as Notepad++ or UltraEdit.  Notepad++ screenshot here.  After a few minutes I came to the conclusion that there had to be a better way.

Import Hacking the Faster Way

I started by querying the ALL_ARGUMENTS view and saving the results in my own table:

create table apex_api_parameters as
	select '3.2' version, object_name,argument_name
	  from all_arguments
	 where package_name='WWV_FLOW_API'
	   and owner = 'APEX_030200'
/

I repeated this step in each database to get the WWV_FLOW_API procedures and parameters for APEX 3.2 and 3.1.  Now I can use SQL set operations to compare them:

select object_name
  from apex_api_parameters
 where version = '3.2'
 minus
select object_name
  from apex_api_parameters
 where version = '3.1';

with common_objects as (
			select object_name
			  from apex_api_parameters
			 where version = '3.2'
		 intersect
			select object_name
			  from apex_api_parameters
			 where version = '3.1')
 select p.object_name,p.argument_name
   from apex_api_parameters p,common_objects c
  where version = '3.2'
	and p.object_name = c.object_name
  minus
 select p.object_name, p.argument_name
   from apex_api_parameters p,common_objects c
  where version = '3.1'
    and p.object_name = c.object_name

Based on the results of the first query, I know the procedures in WWV_FLOW_API that exist in 3.2 but not in 3.1.  These almost always equate to new features.  If I were comparing 3.1 to 3.0 I would see the CREATE_WORKSHEET procedure which equates to interactive reports. The second query shows the parameters to procedures that exist in 3.2 but not in 3.1.

Step by Step Hacking

  1. Make a backup copy of the app you plan to edit.
  2. Using the APEX Application Builder in the newer (source) instance, edit the application to remove any major features that you know don’t exist in the older (target) instance, such as interactive reports.
  3. Export your application from the source instance.
  4. In the target instance, create a new, 1 page dummy application and export it.
  5. Use the SECURITY_GROUP_ID, SET_VERSION, and FLOW_ID from the dummy application in your actual application.
  6. Run my Parameter Diff application (below) and keep it open to use in the next two steps.
  7. Continue editing the application export file to remove any procedures that are not supported in the target instance.
  8. Now find all of the unsupported parameters for existing procedures and delete them.
  9. Test your import.  Repeat steps 7-9 if needed.

To make this process easier, I created an APEX app with all of the API parameters from 3.2, 3.1, 3.0, and 2.2.  You can see it in action here, or download it here.  Keep in mind the supporting objects contain 8,000+ inserts, so it may take a minute or two to install.  Happy hacking!!!

Posted in APEX, Application Express, Oracle | Tagged: , , | 9 Comments »

PL/SQL LDAP over SSL – Please Test

Posted by Tyler Muth on January 30, 2009

I’ve been meaning to write this for quite a while now (measured in years), but never got around to it.  None of the DBMS_LDAP code samples support SSL, including the APEX LDAP Authentication Scheme.  Most LDAP directories require SSL for authentication, as you would otherwise send usernames and passwords in clear text.  Depending on your network topology, the risk of someone capturing that data might be very low, but just the same, it should really be SSL.

One perceived barrier to making this work was my belief that this required an Oracle Wallet, which is a bit of a hassle to set up.  There was also some confusion on my part that Oracle Wallets required a license for the Advanced Security Option (ASO), so this would limit the use of this code to only the subset of our customers that have purchased this option.  I’m still looking for the email on this, but I believe we confirmed that the Oracle Wallet Manager by itself (such as for DBMS_LDAP or UTL_HTTP) does not require ASO; ASO is only required when you are using its explicitly defined features such as encrypted columns / tablespaces / backups, network encryption, and a few others I can’t remember.  If someone has more info on this, please feel free to comment as I’m too [ busy | lazy | (select adjective from adjectives where id = round(dbms_random.value(1, 100)))] to find it [today | tomorrow | (select sysdate + round(dbms_random.value(1,365)) from dual) ].  When I finally sat down to write this for my current project, I quickly discovered that for SSL Mode 1 (which is what 99.999% of people want), no wallet is required!  I can’t believe I put this off for so long based on a false assumption.

So what does this code do? It can:

  • Bind to an LDAP directory using SSL or non-SSL
  • Authenticate a user using their exact Distinguished Name (DN) such as: cn=tyler.muth, l=amer, dc=mycompany, dc=com
  • Given a starting point (search base) and a username (tyler.muth), search for that user, retrieve their DN, then authenticate.  This handles users in multiple sub-trees.

This is NOT production code! This is NOT production code! This is NOT production code!  Why am I posting it here then?  Because I want people to test it.  I’ve tested it on the project I’m working on, but that doesn’t mean it’s bug free.  I want people who are familiar with PL/SQL and LDAP to review and test my code.  As I receive suggested changes to the code, I’ll update it.  When I feel comfortable that enough people have tested, I’ll blog again and “release it” to the community to use.  My goal is to make this as flexible and easy as possible so it just becomes a utility that people can use when they need LDAP.  I can’t make any promises, but if it makes it back into APEX, then you will have helped to improve a product.  If you are not all that familiar with PL/SQL and LDAP, or are looking for a production ready solution, this is probably not for you.  When it’s ready, this blog post will be mostly empty and point to the new location of the code.

I’ve only tested this against OID using an 11.something database, so I’d love to have people test this against Active Directory and any other LDAP directories they work with, and on at least 10gR2 and XE.  Mark Wilcox is testing it with Oracle Virtual Directory in the next few days as well.  I also ran Wireshark on the database server to capture the communication between the database and the LDAP directory.  As expected, in non-SSL mode, my username and password were in the clear.  With SSL enabled, everything was garbage.  Please use Wireshark with caution as it may violate network policies of your organization, get you fired, and accelerate global warming.  It’s a great tool when used ethically.

I really appreciate the feedback I’ve received in the past from this community, especially in pointing out errors with my code.  Thanks in advance for your help.

Customize the ldap_globals package with data from your environment. For most environments, you’ll only need to change g_host and g_search_base.

create or replace package ldap_globals
authid definer
is

	g_host          constant varchar2(255) := 'ldap.mycompany.com';
	g_port          constant number        := 636;
	g_search_base   constant varchar2(255) := 'dc=mycompany,dc=com';
	g_search_filter constant varchar2(255) := 'cn=';
	g_exact_dn      constant boolean       := false;
	g_use_ssl       constant boolean       := true;

	-- if g_exact_dn is false, then we will start at the search base and find the user to get their DN.
	--   if true then we will just bind with the DN and you should pass the fully qualified user DN into p_username

	-- SSL is hard-coded to use mode 1, as modes 2 and 3 are highly unlikely.  See the DBMS_LDAP doc for more info.
	-- No wallet is needed for mode 1

end ldap_globals;
/
show errors

There’s no need (that I know of) to customize any code in this function.

create or replace function  ldap_authenticate(
	p_username        in    varchar2,
	p_password        in    varchar2)
return boolean
authid definer
is
	l_retval      pls_integer;
	l_session     dbms_ldap.session;
	l_attrs       dbms_ldap.string_collection;
	l_message     dbms_ldap.message;
	l_entry       dbms_ldap.message;
	l_dn          varchar2(1000);
	l_user_handle dbms_ldap_utl.handle;
	l_return      pls_integer;
	l_user_type   pls_integer;

	l_filter      varchar2(256);

	function do_connect
		return pls_integer
	as
		l_ldap_user   varchar2(256) := NULL;
		l_ldap_pwd    varchar2(256) := NULL;
	begin
		dbms_ldap.use_exception := true;
		l_session := dbms_ldap.init( ldap_globals.g_host , ldap_globals.g_port);

		if ldap_globals.g_use_ssl = true then
			l_retval := dbms_ldap.open_ssl(
				ld              => l_session,
				sslwrl          => NULL, -- wallet location, ie file:/etc/ORACLE/WALLETS/oracle
				sslwalletpasswd => NULL, -- wallet password
				sslauth         => 1);   -- NO_AUTH: 1, ONE_WAY_AUTH: 2, TWO_WAY_AUTH: 3
		end if;

		return dbms_ldap.simple_bind_s( l_session, l_ldap_user, l_ldap_pwd );
	end;

	procedure do_close
	as
	begin
		l_retval := dbms_ldap.unbind_s( l_session );
	end do_close;

	procedure handle_error(p_message in varchar2)
	as
	begin
		do_close;

		$IF $$logger $THEN
			logger.log_error(p_message);
		$END

		RAISE_APPLICATION_ERROR (-20001,p_message);
	end;

begin
	l_user_type := DBMS_LDAP_UTL.AUTH_SIMPLE;

	l_retval := do_connect;

	if ldap_globals.g_exact_dn = false then

		l_attrs(1)  := 'dn';
    l_filter := '('|| ldap_globals.g_search_filter|| p_username||')';
		l_retval := dbms_ldap.search_s( l_session, ldap_globals.g_search_base, dbms_ldap.scope_subtree, l_filter, l_attrs, 0, l_message );

		l_retval := DBMS_LDAP.count_entries(l_session, l_message);

		if l_retval > 1 then
			handle_error('More than one entry found when searching for users DN.');
		end if;

		if l_retval = 0 then
			handle_error('No entries were found when searching for user DN');
		end if;

		l_entry := dbms_ldap.first_entry( l_session, l_message );
		l_dn := dbms_ldap.get_dn( l_session, l_entry );
	else
		l_dn := p_username;
	end if; -- ldap_globals.g_exact_dn = false

	l_retval := null;
	l_retval := DBMS_LDAP_UTL.create_user_handle(
				user_hd     => l_user_handle,
				user_type   => DBMS_LDAP_UTL.TYPE_DN,
				user_id     => l_dn);

		begin
      dbms_ldap.use_exception := false;
      l_return := null;
			l_return := dbms_ldap.simple_bind_s(
				ld      => l_session,
				dn      => l_dn,
				passwd  => p_password
				);
      if l_return != 0 then
        do_close;
        return false;
      end if;

		exception when others then
			do_close;
			raise;
		end;

    dbms_ldap.use_exception := true;
	do_close;
	if l_return is null then
		handle_error('simple_bind_s did not return a value.');
		return false;
	end if;

	if l_return = 0 then
		return true;
	end if;

	exception when others then
		do_close;
		raise;
		return false;
end ldap_authenticate;
/
show errors

A simple test script. If using SQL Developer, make sure you enable output on the DBMS_OUTPUT tab.

set serveroutput on
declare
	l_return boolean;
begin
	l_return := ldap_authenticate(
					p_username	=> 'first.last',
					p_password	=> '');

	if l_return = true then
		dbms_output.put_line('Authenticated');
	else
		dbms_output.put_line('Fail!');
	end if;
end;
/

Posted in Application Express, Oracle, PLSQL, Security | Tagged: , , , | 79 Comments »

jQuery Selectors Will Change Your Life

Posted by Tyler Muth on December 17, 2008

OK, maybe not your life but it will absolutely change the way you code JavaScript. If you haven’t used jQuery yet, here’s the description from the jQuery site:

jQuery is a fast and concise JavaScript Library that simplifies HTML document traversing, event handling, animating, and Ajax interactions for rapid web development. jQuery is designed to change the way that you write JavaScript.

jQuery is technology agnostic, so you can use it in APEX, J2EE, PHP, .Net, Rails, etc. I’ve been using it with APEX for the last few months and there’s a pretty good chance it will actually be included in the 4.0 release of APEX.

Why are selectors so great? Because they completely change the way you work with JavaScript. Traditionally, people add JavaScript such as onClick() events in the code that renders a page. This is easy for hand-coded HTML or other bare-bones technologies such as PHP (when not using a framework). But in the case of frameworks such as APEX, you actually don’t have access to all of the code that rendered the page. Using selectors, you can get very granular access to any element on the page after it is rendered. For example, the following code changes the style of EVERY link on your page:

$(document).ready(function() {
  $("a").css('text-decoration','none');
});

Or maybe you want to add an onClick event to every div that has a class named “clickme”:

$(document).ready(function() {
    $("div[class=clickme]").click(function() {
       alert('Yep, you clicked me');
    });
});

The following code turns any div on the page red when you hover over it, then white when you move your mouse out of it.

$(document).ready(function() {
    $("div").hover(function() {
       $(this).css('background-color','#ff0000');
    },
        function() {
       $(this).css('background-color','#ffffff');
    });
});

Getting a little more APEX specific, here’s a bit of code that prints all values from the “SAL” column out to the Firebug Console (you must have Firebug installed):

// find all TD's with a class of t20data and headers=SAL.  For each of those, run some function.  console.log is Firebug specific.
$("td[class=t20data][headers=SAL]").each(function(){
  console.log($(this).html());
});

Building on the previous example again, lets change the color of the data in a column based on its value:

// find all TD's with a class of t20data and headers=SAL.  For each of those, run some function.  console.log is Firebug specific.
$("td[class=t20data][headers=SAL]").each(function(){
  sal = parseFloat($(this).html());
  if (sal<1000){
     theColor='green';}
  else if (sal >= 1000 && sal < 2500){
     theColor='yellow';
  }
  else{
       theColor='red';}
   $(this).css('color',theColor);
});

Let’s continue to build on this example by changing the color of every cell in a row based on the value of the SAL column.  Notice that we walk up the DOM to the TR, then down the DOM to every TD in the row.

$("td[class=t20data][headers=SAL]").each(function(){
  sal = parseFloat($(this).html());
  if (sal < 1000){
    theColor = 'green';
  }
  else if (sal >= 1000 && sal < 2500){
    theColor = 'yellow';
  }
  else {
    theColor = 'red';
  }
  // $(this) is currently the TD of the sal column.
  // Walk up to its parent TR, then down to all child TDs.  This effectively loops over every TD in a row
  $(this).parent('tr').children('td').each(function(){
    // $(this) is now the current TD that is a child of the TR
    $(this).css('color', theColor);
  });
});

My final example is from a question on the APEX Forum (<a href="http://forums.oracle.com/forums/message.jspa?messageID=3178134" target="_blank">here's the thread</a>) where someone wanted to replace all instances of the APEX datepicker item with the jQuery UI Datepicker (I posted about this control <a href="https://tylermuth.wordpress.com/2008/07/16/jquery-datepicker-and-apex/" target="_blank">here</a>).  In APEX 3.1 or greater (not 100% sure of the exact versions), all datepickers are wrapped in '<fieldset class="datepicker"...'.  This code looks for any of those items on page load and replaces them.

$(document).ready(function(){ 
    $('fieldset[class=datepicker]').find('img').css('display','none');

    $('fieldset[class=datepicker]').find('input[type=text]').datepicker({ 
             showOn: "both", 
             buttonImage: "http://ui.jquery.com/repository/demos_templates/images/calendar.gif", 
             buttonImageOnly: true}); 
}); 

Luis Cabrel pointed out that previous versions of APEX did not have the fieldset code and posted his solution in that same thread:

$(document).ready(function(){
    $('img[title=Calendar]').hide();

    $('img[title=Calendar]').parent().parent().
            find('input[type=text]').datepicker({
            closeAtTop: true,
            dateFormat: "dd-M-yy",
            showOn: "both",
            buttonImageOnly: false,
            duration: 'fast',
            firstDay: 1});
            });

Imagine how much time that would save you if you had even a couple of applications with 10-15 datepickers per app.

The Firefox plugin “Firebug” makes it so much easier to test and develop selectors since you can use HTML > Inspect to find the DOM nodes you want to select, then use the console to test them. console.log() is also infinitely easier to use for debugging than alert() since you don’t have to click “OK” for each event.

I simply can’t stress enough how useful jQuery selectors are. I strongly encourage everyone to take some time to explore them. There is a great tutorial called “Getting Started With jQuery” that goes into more detail. If you have other cool jQuery solutions with APEX, I’d love to see them!

Posted in Application Express, JavaScript / AJAX | Tagged: , , | 19 Comments »

Call BI Publisher Web Services from APEX

Posted by Tyler Muth on March 31, 2008

Integration between Application Express and BI Publisher is primarily focused on delivering high fidelity reports to the browser, such as clicking a “Print” link on an APEX report and getting back a PDF version of that report in your browser. The configuration and architecture of this integration is documented here. However, there have been a number of questions on the APEX forum and from customers I’ve presented to that are not answered by this solution. So, with the knowledge that BI Publisher 10.1.3.3.2 introduced Web Service APIs to run reports (documented here), and Application Express 3.0 introduced the ability to call Web Services, I thought there might be another integration point between these two products.

This technique is completely different from the traditional integration, since the database where the report resides can be completely different from the database where APEX is installed. We’re going to configure a database connection in BI Publisher then build a report using that connection. APEX is essentially going to pass some parameters to that report and doesn’t need access to the data at all. Since we’re defining the data source in BI Publisher, the report data can actually come from any JDBC compliant database (Oracle, SQL Server, MySQL, etc), a Web Service, or even File data sources.

In summary:

  • Select zero or more parameters on an APEX Page
  • Press Submit
  • This calls an APEX Process of type “Web Service Reference”
  • The Web Service Call instructs BI Publisher to run a report with the given parameters
  • Then deliver it to one or more email addresses and an FTP server (could also be WebDAV, Fax, or a Network Printer)

Software used in the post:

Please make sure all of the following software is installed and configured before starting. You can obviously omit either the email server or the FTP server if you only care about one delivery endpoint. Make sure you test the email and/or FTP servers with their respective clients before you try delivering reports to them with BI Publisher.

Create a Report in BI Publisher

If you are not familiar with BI Publisher, please take the time to go through this Oracle By Example (OBE). The OBE will walk you through all of the steps necessary to build the report in BI Publisher. Once you are familiar with BIP:

  1. Create a report based on HR.DEPARTMENTS and HR.EMPLOYEES. This report has a parameter that allows you to select which department you want to view and uses an RTF template to format the output.
  2. Configure your SMTP and FTP servers in the Admin interface
  3. Click on the “Schedule” link for the report and test FTP delivery first as it’s less prone to errors. Then test email delivery. Make sure these work from the schedule interface before proceeding. If they don’t work here, there’s no chance they’ll work when called from a Web Service API.
  4. Make note of the URL of your report as you’ll use this later in the Web Service API.

Here are 2 screen-shots of my report:

report edit

report view

Test The Web Service Call in soapUI

Now that we know we can schedule a report to be delivered via FTP and email, let’s test the BI Publisher Web Service call to this report. Start by creating a new WSDL Project in soapUI. From the BI Publisher Web Service API Documentation, use the following URL for the WSDL:

http://<host>:<port>/xmlpserver/services/PublicReportService?wsdl

Create a New Request under PublicReportService > scheduleReport. Let’s start with a simple request to deliver our report to FTP with no parameters:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
   <soapenv:Body>
      <pub:scheduleReport xmlns:pub="http://xmlns.oracle.com/oxp/service/PublicReportService">
         <scheduleRequest>
            <deliveryRequest>
               <ftpOption>
                  <ftpServerName>tmuth-ftp</ftpServerName>
                  <ftpUserName>tyler</ftpUserName>
                  <ftpUserPassword>oracle1</ftpUserPassword>
                  <remoteFile>/employees.pdf</remoteFile>
               </ftpOption>
            </deliveryRequest>
            <notificationTo>tyler@tmuth-lap</notificationTo>
            <notifyWhenFailed>true</notifyWhenFailed>
            <reportRequest>
               <attributeFormat>pdf</attributeFormat>
               <reportAbsolutePath>/~tmuth/employees/employees.xdo</reportAbsolutePath>
            </reportRequest>
            <userJobName>employees</userJobName>
         </scheduleRequest>
         <userID>tmuth</userID>
         <password>oracle1</password>
      </pub:scheduleReport>
   </soapenv:Body>
</soapenv:Envelope>

Some things to note about the request above:

  • The ftpServerName value “tmuth-ftp” isn’t the actual server name, it’s the name I gave it in the BIP Admin interface when I added the FTP server.
  • Don’t forget the leading slash in remoteFile.
  • The reportAbsolutePath value is the URL I noted when I ran my report. The BIP interface will often escape the tilde as %7E, but you can just use the tilde here.
  • userID and password are the username and password I used to log in to the BIP Web Interface.
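
If you want to drive the same call from a script rather than soapUI, the envelope is just a string you POST to the service endpoint. A minimal Node sketch; the helper name is mine, and the server name, credentials, and report path are the placeholder values from the example request, not real ones:

```javascript
// Assemble a minimal scheduleReport envelope; values mirror the soapUI example above.
function buildScheduleRequest(opts) {
  return '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">' +
    '<soapenv:Body>' +
    '<pub:scheduleReport xmlns:pub="http://xmlns.oracle.com/oxp/service/PublicReportService">' +
    '<scheduleRequest>' +
    '<deliveryRequest><ftpOption>' +
    '<ftpServerName>' + opts.ftpServerName + '</ftpServerName>' +
    '<ftpUserName>' + opts.ftpUserName + '</ftpUserName>' +
    '<ftpUserPassword>' + opts.ftpUserPassword + '</ftpUserPassword>' +
    '<remoteFile>' + opts.remoteFile + '</remoteFile>' +
    '</ftpOption></deliveryRequest>' +
    '<reportRequest>' +
    '<attributeFormat>pdf</attributeFormat>' +
    '<reportAbsolutePath>' + opts.reportPath + '</reportAbsolutePath>' +
    '</reportRequest>' +
    '<userJobName>' + opts.jobName + '</userJobName>' +
    '</scheduleRequest>' +
    '<userID>' + opts.userID + '</userID>' +
    '<password>' + opts.password + '</password>' +
    '</pub:scheduleReport>' +
    '</soapenv:Body></soapenv:Envelope>';
}

var envelope = buildScheduleRequest({
  ftpServerName: 'tmuth-ftp',      // name from the BIP Admin interface, not a hostname
  ftpUserName: 'tyler',
  ftpUserPassword: 'oracle1',
  remoteFile: '/employees.pdf',
  reportPath: '/~tmuth/employees/employees.xdo',
  jobName: 'employees',
  userID: 'tmuth',
  password: 'oracle1'
});

// POST it to the service endpoint (the WSDL URL minus "?wsdl"), e.g.:
// fetch('http://host:port/xmlpserver/services/PublicReportService', {
//   method: 'POST',
//   headers: { 'Content-Type': 'text/xml; charset=utf-8', 'SOAPAction': '' },
//   body: envelope
// });
```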

Once that is working, you can try adding a parameter to narrow the report to just the sales department. Add the following code at the end of the “reportRequest” section, just after reportAbsolutePath:

<parameterNameValues>
  <item>
     <name>DEPARTMENT</name>
     <multiValuesAllowed>false</multiValuesAllowed>
     <values>
        <!--  80 = SALES, -1 = All -->
        <item>80</item>
     </values>
  </item>
</parameterNameValues>
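
If you generate these requests programmatically rather than pasting XML, the parameter block is easy to template. A small sketch (the helper name is mine, and it skips XML-escaping of values for brevity):

```javascript
// Build a <parameterNameValues> fragment for a single report parameter.
// multiValuesAllowed mirrors the flag shown in the example above.
function parameterXml(name, values, multiValuesAllowed) {
  var items = values.map(function (v) {
    return '<item>' + v + '</item>';
  }).join('');
  return '<parameterNameValues><item>' +
         '<name>' + name + '</name>' +
         '<multiValuesAllowed>' + multiValuesAllowed + '</multiValuesAllowed>' +
         '<values>' + items + '</values>' +
         '</item></parameterNameValues>';
}

var deptParam = parameterXml('DEPARTMENT', ['80'], false); // 80 = SALES, -1 = All
```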

Now, let’s add email as a second delivery endpoint. Add the following code to the “deliveryRequest” section, just after the closing ftpOption tag:

<emailOption>
  <emailBody>webservice test</emailBody>
  <emailFrom>bip@localhost</emailFrom>
  <emailSubject>test</emailSubject>
  <emailTo>tyler@tmuth-lap</emailTo>
</emailOption>

Create an APEX Application to Call The Report

  1. Create a new APEX application with one blank page (Page 1).
  2. Create a new “Web Service Reference” (Application > Shared Components > Logic)
    1. Do not search a UDDI repository.
    2. Create the reference manually (link on the right side of the page).
    3. Enter the WSDL you used for soapUI
    4. Enter the SOAP envelope at the bottom of this section (make sure you change the values to match your configuration).
  3. Create an Item on Page 1 of type “Select List”. The LOV should be a query of department_name,department_id from hr.departments. The NULL value is -1.
  4. Create a Button labeled “Deliver Report”. Also have the page branch to itself.
  5. Create a Process (on-submit):
    1. Type: Web Service
    2. Select the Web Service Reference you created earlier.
    3. Only fire this process when you press the Deliver Report button.
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
   <soapenv:Body>
      <pub:scheduleReport xmlns:pub="http://xmlns.oracle.com/oxp/service/PublicReportService">
         <scheduleRequest>
            <deliveryRequest>
               <ftpOption>
                  <ftpServerName>tmuth-ftp</ftpServerName>
                  <ftpUserName>tyler</ftpUserName>
                  <ftpUserPassword>oracle1</ftpUserPassword>
                  <remoteFile>/employees.pdf</remoteFile>
               </ftpOption>
               <emailOption>
                  <emailBody>webservice test</emailBody>
                  <emailFrom>bip@localhost</emailFrom>
                  <emailSubject>test</emailSubject>
                  <emailTo>tyler@tmuth-lap</emailTo>
               </emailOption>
            </deliveryRequest>
            <notificationTo>tyler@tmuth-lap</notificationTo>
            <notifyWhenFailed>true</notifyWhenFailed>
            <reportRequest>
               <attributeFormat>pdf</attributeFormat>
               <reportAbsolutePath>/~tmuth/employees/employees.xdo</reportAbsolutePath>
               <parameterNameValues>
                  <item>
                     <name>DEPARTMENT</name>
                     <multiValuesAllowed>false</multiValuesAllowed>
                     <values>
                        <item>#P1_DEPARTMENT#</item>
                     </values>
                  </item>
               </parameterNameValues>
            </reportRequest>
            <userJobName>employees</userJobName>
         </scheduleRequest>
         <userID>tmuth</userID>
         <password>oracle1</password>
      </pub:scheduleReport>
   </soapenv:Body>
</soapenv:Envelope>

Notice that we reference APEX items in Web Service requests using #ITEM# notation. You can now test your application by selecting a department and then submitting the page.
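
APEX performs this #ITEM# substitution for you at runtime. Conceptually it's just token replacement, which a rough sketch (my illustration, not APEX's actual implementation) makes clear:

```javascript
// Replace #ITEM_NAME# tokens in a request body with session item values.
// Tokens with no matching item are left untouched.
function substituteItems(body, items) {
  return body.replace(/#([A-Z0-9_]+)#/g, function (match, name) {
    return Object.prototype.hasOwnProperty.call(items, name) ? items[name] : match;
  });
}

var result = substituteItems('<item>#P1_DEPARTMENT#</item>', { P1_DEPARTMENT: '80' });
// result is '<item>80</item>'
```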

Use Cases

  • Long running reports, asynchronously delivered via email
  • Print bar codes via network attached printer
  • Publishing a report in HTML or PDF format to a web site via FTP

Downloads

Click here to download the APEX Application, BI Publisher Report, and PDF output of the BIP Report. Note that in the APEX application, the P1_DEPARTMENT item queries HR.DEPARTMENTS. employees.zip is the BIP Report which you simply want to upload to BI Publisher “as-is” without unzipping.

Posted in APEX, BI Publisher | Tagged: , , | 48 Comments »