We could divide the answers into a number of areas.
(Question being: How do I speed up INSERTs?)
Paul hit an important idea:
What's the business context?
I see two obvious contexts:
-Per-request INSERTs from the app? (Here we want lower latency)
-Batch job INSERTs? (Here we want higher throughput)
Some generic answers:
-Fewer competing indexes
-Fewer competing constraints
-Avoid Active Record
-Partitioning (an AWESOME feature which can also speed up all other DDL and DML)
-Fewer INSERTs to speed up throughput (debatable?)
-More INSERTs to maybe lower latency (debatable?)
-Disable/Delay competing writes from journaling ("archiving" in Oracle-speak)
-Disable/Delay competing reads and writes from the application (if the
business can handle that behavior)
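To make the "avoid Active Record" and "fewer INSERTs" points concrete, here is a minimal sketch comparing one statement per row (what per-object Active Record saves amount to) against a single multi-row INSERT. The table and column names are made up for illustration; the multi-row VALUES syntax shown works in MySQL and PostgreSQL.

```ruby
# Hypothetical rows to load; table name "events" is invented.
rows = [
  [1, "alpha"],
  [2, "beta"],
  [3, "gamma"],
]

# Naive: one INSERT statement (and one round trip) per row.
naive = rows.map { |id, name| "INSERT INTO events (id, name) VALUES (#{id}, '#{name}');" }

# Batched: one statement, one round trip.
values  = rows.map { |id, name| "(#{id}, '#{name}')" }.join(", ")
batched = "INSERT INTO events (id, name) VALUES #{values};"

puts naive.length  # number of statements the naive approach issues
puts batched       # the single batched statement
```

The win is mostly in round trips and per-statement overhead, so the gap widens as the row count grows. (In real code you would use bind variables rather than string interpolation.)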
Areas to explore:
-INSERT into a separate DB (Paul's points come into play here)
-Throw CPU at it and compress before INSERT
-Throw money at it and buy "enterprise" scale hardware (fibre-channel
cards and storage arrays loaded with cache)
-Throw money at it and buy Oracle EE which has many features which
speed up both INSERT scenarios
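For the partitioning point above, here is a hedged sketch of what a range-partitioned table looks like in Oracle syntax (table, column, and partition names are all invented). INSERTs then only touch the one partition (and its local index segments) that the row maps to.

```ruby
# Hypothetical DDL for a range-partitioned table (Oracle syntax).
# All names here are made up for illustration.
ddl = <<~SQL
  CREATE TABLE orders (
    order_id   NUMBER,
    created_at DATE
  )
  PARTITION BY RANGE (created_at) (
    PARTITION p2010q1 VALUES LESS THAN (TO_DATE('2010-04-01','YYYY-MM-DD')),
    PARTITION p2010q2 VALUES LESS THAN (TO_DATE('2010-07-01','YYYY-MM-DD')),
    PARTITION pmax    VALUES LESS THAN (MAXVALUE)
  )
SQL
puts ddl
```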
Also, Oracle EE has a feature called the Resource Manager, similar to the
resource schedulers found in operating systems.
You could configure Oracle Resource Manager to give more resources to
sessions or jobs which you, the developer, have flagged as INSERT-heavy.
I gave a short preso on Oracle Resource Manager and how to talk to it
with the Ruby OCI8 adapter about 2 years ago at the SV-Ruby Meetup:
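A sketch of flagging a session as INSERT-heavy from Ruby via OCI8: the consumer group name "BULK_LOAD_GROUP" is invented, and this assumes a DBA has already created the Resource Manager plan/group and granted the session the privilege to switch into it.

```ruby
# Hypothetical: switch the current session into an INSERT-heavy
# consumer group so Resource Manager favors it. The group name
# "BULK_LOAD_GROUP" is made up; a DBA must set up the plan first.
plsql = <<~SQL
  DECLARE
    old_group VARCHAR2(30);
  BEGIN
    DBMS_SESSION.SWITCH_CURRENT_CONSUMER_GROUP('BULK_LOAD_GROUP', old_group, FALSE);
  END;
SQL

# With a live connection (credentials/TNS string are placeholders):
#   require "oci8"  # ruby-oci8 gem
#   conn = OCI8.new("scott", "tiger", "//dbhost/orcl")
#   conn.exec(plsql)
puts plsql
```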
On 6/9/10, ambert ho <[address removed]> wrote:
> So Dan,
> Gonna post the real answers? I wanna know since I'm putting stuff into
> Heroku and my noobness didn't know about the stronger type enforcement in
> Postgres.
> On Sun, May 23, 2010 at 9:29 PM, ambert ho <[address removed]> wrote:
>> Q1: Flattening out by using more columns, and putting more data in each
>> column, increases insertion performance since you don't have to insert into
>> multiple tables for things that you would have split up under 1st, 2nd, or
>> 3rd normal form. But depending on your schema, denormalizing also means
>> redundant inserts if multiple tables referenced the table that you just got
>> rid of. So if the database isn't sharded, I'm not sure how much you gain
>> if getting rid of one extra insert means you now have to insert the same
>> data twice.
>> A caveat with having lots of columns is if you collapse several tables into
>> one but end up needing to index multiple columns in the same table. That
>> slows down insertion since the db has to calculate where the row goes in
>> each b-tree every time there's an insert.
>> Q2: Dunno too much about this, someone educate me! I believe PostgreSQL
>> and Oracle have journaling and MySQL doesn't, I dunno.
>> On Fri, May 21, 2010 at 12:39 PM, DAN BIKLE <[address removed]> wrote:
>>> What denormalization techniques can we use at the database layer to
>>> speed up INSERTs?
>>> Compare/Contrast features and behaviors of MySQL, PostgreSQL, (and
>>> Oracle if you want) which are interesting to the Rails developer and
>>> Rails site operations dept.
>>> This message was sent by DAN BIKLE ([address removed]) from Silicon Valley
>>> Ruby Meetup.