I’ve been getting in and out of Rails development for some time now… and my biggest problem is finding documentation for simple things (I guess everyone else already knows this stuff, but I’m still getting up to speed).
So I decided to blog about one of those simple things… in case there is still someone who has not mastered Ruby on Rails.
When several models are saved in a single transaction in Rails, you usually want to roll back the transaction if any of them fails. This happens automatically if an exception is raised inside the transaction block.
However, you usually also want to display validation errors and not show the full rails trace to the user.
What the example does not show is how to accomplish this. Two options here:
- use `save` (no exclamation mark) and check the return value. If any of the saves returns false, raise `ActiveRecord::Rollback` after your render or redirect.
- rescue from `ActiveRecord::RecordInvalid` and render or redirect there.
I think #2 is more elegant… but since I knew about it too late, my code uses #1.
```ruby
begin
  ActiveRecord::Base.transaction do
    first.save!
    second.save!
    third.save!
    fourth.save!
  end
rescue ActiveRecord::RecordInvalid => invalid
  # do whatever you wish to warn the user, or log something
end
```
Example is from this other blog post. I wish I’d read it sooner!
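Option #1 can be sketched in plain Ruby. To keep it runnable outside Rails, `Rollback`, the `transaction` helper and `Model` below are stand-ins I made up for `ActiveRecord::Rollback`, `ActiveRecord::Base.transaction` and a real model — the point is only the control flow:

```ruby
# Stand-in for ActiveRecord::Rollback.
class Rollback < StandardError; end

# Fake model whose save returns false (instead of raising) when
# validation fails, like ActiveRecord's save without the bang.
class Model
  attr_reader :saved

  def initialize(valid)
    @valid = valid
    @saved = false
  end

  def save
    @saved = @valid
    @valid
  end
end

# Stand-in for ActiveRecord::Base.transaction: a Rollback raised
# inside the block aborts it but is swallowed, not re-raised.
def transaction
  yield
rescue Rollback
  nil
end

first  = Model.new(true)
second = Model.new(false) # pretend its validations fail

transaction do
  raise Rollback unless first.save
  # in a controller, the render or redirect would go right before this raise
  raise Rollback unless second.save
end

puts "first saved: #{first.saved}, second saved: #{second.saved}"
```

The nice property of this approach is that `ActiveRecord::Rollback` is swallowed silently by the real `transaction` block too, so the user only ever sees the page you rendered with the validation errors.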
Sometimes you spend time developing and testing your new bundle using the runtime workbench launched from the same eclipse, and everything is fine… until some user installs that bundle and finds that it does… nothing!
The OSGi console is a great help in such cases. The `diag` command can tell you what’s wrong. Usually it is a dependency that is missing.
Start by launching eclipse from the command line with the `-console` option. Then run `diag your.bundle.id`, and you’ll see eclipse telling you what’s wrong.
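A session looks roughly like this — the bundle id and the missing dependency are made up, and your output will name the actual unresolved constraints:

```
$ eclipse -console
osgi> diag org.example.mybundle
reference:file:plugins/org.example.mybundle_1.0.0.jar [42]
  Direct constraints which are unresolved:
    Missing imported package com.example.util_1.0.0.
```

Once you know which package or required bundle is missing, fixing the manifest (or installing the missing bundle) is usually straightforward.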
Some weeks ago I wrote down some notes for making Oracle work harder and faster with hibernate.
Those notes were collected from several places on the Internet and are supposed to help.
But they didn’t. Not for us.
Good news is that we found the problem of the bad performance and fixed it… it was all caused by foreign key integrity checks.
We got our first clue when the sysadmin detected a lot of open cursors (say, 12 or so) for a simple update statement.
We were using defaults for most of the hibernate settings, and even though the update was intended to change only one column, the SQL statement set all the fields in the table for the affected row.
And Oracle fired all the checks.
I’m not sure why Oracle does not optimize this by first checking whether the value has changed (if it hasn’t, the constraints are necessarily still valid), but the solution was simple… don’t update more than you need.
I have a new friend and it is called `@org.hibernate.annotations.Entity(dynamicUpdate=true)`. There is some (extremely brief) documentation in the hibernate annotations reference and javadocs. Of course you can also use it in the hbm files if XML is your thing.
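In an hbm mapping the equivalent is the `dynamic-update` attribute on the class element. A sketch, with a made-up entity (class, table and column names are just for illustration):

```xml
<hibernate-mapping>
  <!-- dynamic-update="true": generated UPDATE statements
       include only the columns that actually changed -->
  <class name="com.example.Account" table="ACCOUNT" dynamic-update="true">
    <id name="id" column="ID">
      <generator class="native"/>
    </id>
    <property name="name" column="NAME"/>
    <property name="balance" column="BALANCE"/>
  </class>
</hibernate-mapping>
```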
Just in case you did not guess it, this only updates the dirty properties of your objects (i.e. those you changed after retrieving them from the database).
This has potential caveats if another transaction somehow updates your object, since the database state will be different from what you expect. To the best of my understanding this can only happen with detached objects at any reasonable isolation level… and in that case you should reload the state from the database.
Well, so that was it… too many constraints on a table and updating more columns than needed. Updating only the affected columns increased performance to where we expected: better than the MySQL-based prototype.
I’ve collected the following bits of information on tuning Oracle performance when used with Hibernate… it might help someone (and I need to write it down somewhere I won’t lose when moving from one desk to another!).
The following properties should be set:
```properties
# See http://martijndashorst.com/blog/2006/11/28/hibernate-31-something-performance-problems-contd/
# NOTE: See http://opensource.atlassian.com/projects/hibernate/browse/HHH-3359
hibernate.jdbc.wrap_result_sets = true

# See http://www.hibernate.org/120.html#A10
hibernate.dbcp.ps.maxIdle = 0
hibernate.c3p0.max_statements = 0

# Everything else comes from http://docs.codehaus.org/display/TRAILS/DatabaseConfigurations
# The Oracle JDBC driver doesn't like prepared statement caching
# or batching with BLOBs very much.
```
I have not tested the performance difference… just collected the information.
Let me know if you know more tricks!
Update: Added a warning about a memory leak in current hibernate, thanks to dfernandez.
Update 2: Statement caching for Oracle can be enabled directly on the datasource implementation. See this article.