Check for Insert Failure Due to Page Lock - sybase-ase

I freely admit that I know nothing about Sybase in terms of its return codes; my experience has primarily been with Oracle and SQL Server. This particular project requires an insert into a binary field of a table, and the insert periodically fails because the entry is locked. Looking at the code, it doesn't appear that I am able to reliably detect a lock condition.

My current strategy is to insert the data, then select it back to determine whether the insert was successful, and retry if it was not, using threads that sleep for several seconds between retry attempts. This fails to account for other data that may have altered the entry prior to my original insert and may be more current than the data I am attempting to insert.

Is there a simple way to determine whether the row is locked before attempting an insert, wait for the lock to clear, and then lock the row myself before inserting? Alternatively, if I can detect that the entry is locked, I can fail the transaction and alert the user to the failure so that it can be manually inspected. Before anyone asks: I am unable to change the architecture of the RDBMS in terms of how it is set up to lock entries. This has to be handled by the code that performs the insert.

Locking the entire table will work, but it is pretty crude if you are only after the smaller granularity of a page (as per the title of your question).
You can get that by running SET LOCK NOWAIT before the INSERT and then checking @@error for status code 12205, which indicates there was a lock on something that was needed in order to do the insert (a sketch of this follows the table-lock example below). Don't forget to run SET LOCK WAIT afterwards to restore the default, or NOWAIT will apply to the rest of your session.

Try:
BEGIN TRANSACTION

-- Try to take an exclusive table lock without blocking; NOWAIT makes the request fail immediately.
LOCK TABLE <<table_name>> IN EXCLUSIVE MODE NOWAIT

-- @@error is non-zero if the lock could not be acquired.
IF @@error != 0
BEGIN
    ROLLBACK TRANSACTION
    PRINT 'COULD NOT ACQUIRE LOCK. EXITING ...'
    RETURN 0
END

<< your code here if it was able to lock >>

COMMIT TRANSACTION
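
For the finer page-level granularity, here is a minimal sketch of the SET LOCK NOWAIT approach described above (the table, column, and value are placeholders):

SET LOCK NOWAIT                    -- fail immediately instead of blocking on a lock

BEGIN TRANSACTION
INSERT INTO <<table_name>> (<<binary_column>>) VALUES (<<your_data>>)

IF @@error = 12205                 -- 12205: a lock needed for the insert could not be acquired
BEGIN
    ROLLBACK TRANSACTION
    PRINT 'ROW/PAGE IS LOCKED. EXITING ...'
END
ELSE
BEGIN
    COMMIT TRANSACTION
END

SET LOCK WAIT                      -- restore the default blocking behaviour for the rest of the session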

Related

MySQL: How to lock a whole table during a transaction?

I have a transaction like this (InnoDB):
START TRANSACTION;
SELECT 1 FROM test WHERE id > 5; -- let's assume this returns 0 rows
-- ... some very long operation here ...
-- If the previous SELECT returned 0 rows, this INSERT will be executed
INSERT INTO test VALUES ...;
COMMIT;
Now the problem is that if several sessions execute this at the same time, they will all end up executing the INSERT: by the time the long task in each session has finished, every session has had plenty of time to run the SELECT, and it returns a 0-row result for all of them, since the INSERT hasn't been executed yet while the long task is running.
So basically, I need to somehow lock the whole table test (so it can't be read by other sessions and they are forced to wait) after I execute START TRANSACTION, but I am not sure how, because I can't use a LOCK TABLES test statement: that commits the transaction I have started.
I also cannot use SELECT ... FOR UPDATE, because that only prevents existing rows from being modified; it won't prevent new rows from being inserted.
If you've got some long-running task which only needs to be run once, set a flag in a separate table to say that the task has started, and check that instead of the number of rows written to the target table, so that another instance of the service does not kick off the job twice.
This also has the advantage that you're not relying on the specifics of the task to know its status (and therefore, if the nature of the task changes, the integrity of your other code doesn't collapse), and you're not trying to work around the transaction by doing something horrible like locking an entire table. One of the points of using transactions is that it's not necessary to lock everything (different isolation levels can of course be used, but that's not the subject of this discussion).
Then set the flag back to false when the last bit of the task has finished.
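
A rough sketch of that flag approach, with a made-up task_status table and task name:

-- One row per long-running task; the flag is claimed atomically before any work starts.
CREATE TABLE task_status (
    task_name VARCHAR(64) PRIMARY KEY,
    started   TINYINT NOT NULL DEFAULT 0
) ENGINE=InnoDB;

-- Each instance tries to claim the task; only one session will see 1 affected row.
UPDATE task_status SET started = 1 WHERE task_name = 'populate_test' AND started = 0;

-- If the affected-row count was 1, this session owns the task: run the long operation
-- and the INSERT INTO test, then set the flag back once the last step has finished.
UPDATE task_status SET started = 0 WHERE task_name = 'populate_test';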

Controlling read locks on a table for multithreaded PL/SQL execution

I have a driver table with a flag that determines whether that record has been processed or not. I have a stored procedure that reads the table, picks a record up using a cursor, does some stuff (inserts into another table) and then updates the flag on the record to say it's been processed. I'd like to be able to execute the SP multiple times to increase processing.
The obvious answer seemed to be to use FOR UPDATE SKIP LOCKED in the cursor's SELECT, but it seems this means I cannot commit within the loop (to update the processed flag and commit my inserts) without getting the "fetch out of sequence" error; the pattern is sketched below.
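
For reference, the pattern described above looks roughly like this (the driver and target table names and their columns are hypothetical); the COMMIT inside the loop is what invalidates the FOR UPDATE cursor and raises ORA-01002 on the next fetch:

DECLARE
    CURSOR c_driver IS
        SELECT id, payload
        FROM driver_table
        WHERE processed = 'N'
        FOR UPDATE SKIP LOCKED;
BEGIN
    FOR rec IN c_driver LOOP
        -- do some stuff: insert into the other table
        INSERT INTO target_table (driver_id, payload)
        VALUES (rec.id, rec.payload);

        -- mark the driver record as processed
        UPDATE driver_table
        SET processed = 'Y'
        WHERE CURRENT OF c_driver;

        COMMIT;  -- releases the FOR UPDATE locks; the next fetch fails with ORA-01002
    END LOOP;
END;
/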
Googling tells me Oracle's AQ is the answer but for the time being this option is not available to me.
Other suggestions? This must be a pretty common request but I've been unable to find anything that useful.
TIA!
A

PHP - MySQL Row level locking example

I've seen many posts explaining the usage of SELECT ... FOR UPDATE and how to lock a row; however, I haven't been able to find any that explain what happens when the code tries to read a row that is already locked.
For instance, say I use the following:
$con->autocommit(FALSE);
$ps = $con->prepare("SELECT 1 FROM event WHERE row_id = 100 FOR UPDATE");
$ps->execute();
...
// do something if the lock was successful
...
$con->commit();
In this case, how do I determine if my lock was successful? What is the best way to handle a scenario when the row is locked already?
Sorry if this is described somewhere, but all I seem to find are the 'happy path' explanations out there.
If the row you are trying to lock is already locked, the MySQL server will not return any response for this row. It will wait² until the locking transaction is either committed or rolled back.
(Obviously, if the row has already been deleted, your SELECT will return an empty result set and not lock anything.)
After that, it will return the latest value committed by the transaction that was holding the lock.
Regular SELECT statements will not care about the lock and will return the current value, ignoring that there is an uncommitted change.
So, in other words: your code will only be executed WHEN the lock is successful (otherwise it waits² until the prior lock is released).
Note that using FOR UPDATE will also block any transactional (locking) SELECTs for the time the row is locked. If you do not want this, you should use LOCK IN SHARE MODE instead; that allows such selects to proceed with the current value while still blocking any UPDATE or DELETE statement.
² The query will return an error after the time defined by innodb_lock_wait_timeout: http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_lock_wait_timeout
It will then return: ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
In other words, that is the point where your attempt to acquire the lock fails.
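
A two-session sketch of that behaviour, using the event table from the question:

-- Session 1: takes the row lock and keeps its transaction open.
START TRANSACTION;
SELECT 1 FROM event WHERE row_id = 100 FOR UPDATE;

-- Session 2: the same statement now blocks until session 1 commits or rolls back.
-- If session 1 holds the lock longer than innodb_lock_wait_timeout, session 2 gets:
--   ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
START TRANSACTION;
SELECT 1 FROM event WHERE row_id = 100 FOR UPDATE;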
Side note: this kind of locking is only suitable for ensuring data integrity (i.e. that no referenced row is deleted while you are inserting something that references it).
Once the lock is released, any blocked (or better, delayed) DELETE statement will be executed, and it may delete the row you just inserted, due to cascading from the row on which you held the lock to ensure integrity.
If you want to build a system that prevents two users from modifying the same data at the same time, you should do this at the application level and look at pessimistic vs. optimistic locking approaches, because it is not a good idea to keep transactions running for a long period of time. (In PHP your database connections are typically closed after each request anyway, ending any transaction that is still running.)
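
As an illustration of the optimistic variant, assuming the event table from the question has a data column and a version column added for this purpose:

-- Read the row without holding any lock between requests.
SELECT data, version FROM event WHERE row_id = 100;   -- say this returns version = 7

-- When the user saves, only apply the change if nobody updated the row in the meantime.
UPDATE event
SET data = 'new value', version = version + 1
WHERE row_id = 100 AND version = 7;

-- 0 affected rows means another session changed the row first: reload and report a conflict.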

Check if a transaction on an InnoDB row is occurring?

If a database transaction is occurring on one thread, is there a way for other threads to check whether this transaction is already occurring before attempting the transaction themselves? I know InnoDB has row-level locking, but I want the transaction not to be attempted if it's already occurring on another thread, instead of waiting for the lock to be released and then attempting it.
To make my question clearer, an explanation of what I am trying to do may help:
I am creating a simple raffle using PHP and an InnoDB table in MySQL. When a user loads the page to view the raffle, it checks the raffle's database row to see whether its scheduled end time has passed and whether its "processed" column in the database is true or false.
If the raffle needs to be processed, it begins a database transaction which takes about 5 seconds before being committed and marked as "processed" in the database.
If multiple users load the page at around the same time, I feel that it will process the raffle more than once, which is not what I want. Ideally it would only attempt to process the raffle if no other threads are processing it, otherwise it would do nothing.
How would I go about doing this? Thanks.
You could implement table-level locking and handle any subsequent connections so that they either wait in a queue or fail quietly:
https://dev.mysql.com/doc/refman/5.0/en/lock-tables.html
From the MySQL docs:
SET autocommit=0;
LOCK TABLES t1 WRITE, t2 READ, ...;
... do something with tables t1 and t2 here ...
COMMIT;
UNLOCK TABLES;
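
Applied to the raffle case, a sketch along those lines might look like this (the raffle table and its columns are assumed):

SET autocommit=0;
LOCK TABLES raffle WRITE;

-- Only the session holding the WRITE lock can read or change the row now.
SELECT processed FROM raffle WHERE id = 1 AND end_time <= NOW();

-- If that returned processed = 0, run the ~5 second processing here,
-- then mark the raffle as done so later page loads skip it:
UPDATE raffle SET processed = 1 WHERE id = 1;

COMMIT;
UNLOCK TABLES;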

Auction Bid - Will SELECT LOCK IN SHARE MODE keep information at its most recent?

I am currently looking into how I can manage a high number of bids on my auction site project. As it is quite possible that some people may send bids at exactly the same time, it has become apparent that I need locks to prevent any data corruption.
I have settled on using SELECT ... LOCK IN SHARE MODE, whose documentation states that if any of these rows were changed by another transaction that has not yet committed, the query waits until that transaction ends and then uses the latest values.
http://dev.mysql.com/doc/refman/5.1/en/innodb-locking-reads.html
This suggests to me that the bids will enter a queue where each bid is dealt with in turn and checked to ensure that it is higher than the current bid, and that if anything has changed by the time a queued bid is processed, the latest bid amount is used.
However, I have read that there can be damaging deadlock issues where two users try to place bids at the same time and no query can maintain a lock. Therefore I have also considered using SELECT ... FOR UPDATE, but this will then also block reads, which I am quite unsure about.
If anybody could shed any light on this issue that would be appreciated; and if you could suggest another kind of database (such as a NoSQL store) that would be more suitable, that would be very helpful!
EDIT: This is essentially a concurrency problem where I don't want to be checking the current bid against incorrect/old data, which would produce a 'lost update' on certain bids.
By itself, two simultaneous updates will not cause a deadlock, just transient blocking. Let's call them Bid A and Bid B.
Although we're considering them simultaneous, one will acquire a lock first. We'll say that A gets there 1 ms faster.
A acquires a lock on the row in question. B's lock request goes into the queue and must wait for the lock belonging to A to be released. As soon as A's lock is released, B acquires its lock.
There may be more to your code, but from your question, as I've described it there is no deadlock scenario. In order to deadlock, A must be waiting for B to release its lock on another resource, while B will not release its lock until it acquires a lock on A's resource.
If you need to validate the bid in real time you can either:
A. Use the appropriate transaction isolation level (repeatable read, probably, which is the default in InnoDB) and perform both your select and update in an explicit transaction.
BEGIN TRAN
SELECT ... FOR UPDATE
IF ...
UPDATE ...
COMMIT
B. Perform your check logic in your UPDATE statement itself. In other words, construct your UPDATE query so that it will only affect rows when the current bid is less than the new bid. If no records were affected, the bid was too low. This is a possible approach and reduces work on the DB, but it has its own considerations.
UPDATE ...
WHERE currentBid < newBid
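
Spelled out with assumed column names and placeholder values, option B might look like this; the affected-row count tells you whether the bid was accepted:

-- The WHERE clause performs the validation: the row only changes if the new bid is higher.
UPDATE auction
SET current_bid = 150.00, bidder_id = 7
WHERE auction_id = 42
  AND current_bid < 150.00;
-- 1 affected row: the bid was accepted.  0 affected rows: the bid was too low.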
Personally my vote would be to opt for A because I don't know how complex your logic is.
The repeatable read isolation level will ensure that every time you read a given record in a transaction, the value is guaranteed to be the same. It does this by holding a lock on the row, which prevents others from updating that row until your transaction either commits or rolls back. One connection cannot update your table until the previous one has completed its transaction.
The bottom line is your select/update will be atomic in your DB so you don't have to worry about lost updates.
Regarding concurrency, the key there is to keep your transactions as short as possible. Get in, get out. By default you can't read a record that is being updated because it is in an indeterminate state. These updates and reads should be taking small fractions of a second.
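
And a sketch of option A in MySQL terms, with the same assumed auction table and placeholder values:

START TRANSACTION;

-- Lock the auction row; concurrent bidders block here until this transaction ends.
SELECT current_bid FROM auction WHERE auction_id = 42 FOR UPDATE;

-- In application code: proceed only if the new bid is higher than current_bid.
UPDATE auction SET current_bid = 150.00, bidder_id = 7 WHERE auction_id = 42;

COMMIT;  -- keep the window between SELECT and COMMIT as short as possible, as noted above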
