What is the relation between Sybase spid and SQL query - sybase-ase

As I understand from this question my Sybase ASE database connection has its own SPID. My question is: are complex queries with nested subselects executed by that single SPID? Or does Sybase spawn other SPIDs to execute complex queries?

If parallel processing is enabled, it is possible for spids to spawn other processes. This can occur in large, complex queries when the optimizer chooses a parallel plan, as well as during reorgs and similar database actions.
If this occurs, the newly spawned spid will show the parent spid in the `fid` (Family ID) column of `master..sysprocesses`, or in the output of `sp_who`.
More information on Parallel Queries can be found in the documentation.
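For example, you can watch for worker processes from another session; a minimal sketch, where 42 stands in for the spid of the connection you are watching:
SELECT spid, fid, cmd, status
FROM master..sysprocesses
WHERE fid = 42      -- fid (Family ID) holds the parent spid of each worker
  AND spid != 42    -- exclude the parent process itself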

Related

Performance in PDO / PHP / MySQL: transaction versus direct execution

I am looping through a number of values (1 to 100 for example) and executing a prepared statement inside the loop.
Is there an advantage to using a transaction - committing after the loop ends - compared to a direct execution inside the loop?
The values are not dependent on each other, so a transaction is not needed from that point of view.
If your queries are INSERTs, the section 7.2.19. Speed of INSERT Statements of the MySQL manual gives two interesting pieces of information, depending on whether you are using a transactional engine or not:
When using a non-transactional engine:
To speed up INSERT operations that are performed with multiple statements for nontransactional tables, lock your tables. This benefits performance because the index buffer is flushed to disk only once, after all INSERT statements have completed. Normally, there would be as many index buffer flushes as there are INSERT statements. Explicit locking statements are not needed if you can insert all rows with a single INSERT.
And, with a transactional engine:
To obtain faster insertions for transactional tables, you should use START TRANSACTION and COMMIT instead of LOCK TABLES.
So I am guessing using transactions might be a good idea -- but I suppose that could depend on the load on your server, whether there are multiple users using the same table at the same moment, and all that...
There is more information on the page I linked to, so don't hesitate to read it ;-)
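As a rough illustration of the transactional case, the loop's hundred INSERTs would be wrapped like this (table and column names are hypothetical):
START TRANSACTION;
INSERT INTO my_table (val) VALUES (1);
INSERT INTO my_table (val) VALUES (2);
-- ... the remaining iterations of the loop ...
COMMIT; -- InnoDB flushes the log once here, instead of after every INSERT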
And, if you are doing update statements :
Another way to get fast updates is to delay updates and then do many updates in a row later. Performing multiple updates together is much quicker than doing one at a time if you lock the table.
So, I'm guessing the same can be said as for inserts.
BTW: to be sure, you can try both solutions and benchmark them with microtime() on the PHP side, for instance ;-)
For a faster time you could do all the inserts in one shot, or group them together, perhaps 5 or 10 at a time, bearing in mind that if one insert fails, the entire batch fails.
http://www.desilva.biz/mysql/insert.html
A transaction will slow you down, so if you don't need it then don't use it.
A prepared statement would be a good choice though even if you did batch inserts, as you don't have to keep building up the query each time.
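For illustration, "grouping them together" can mean a single multi-row INSERT; a sketch with hypothetical table and column names:
INSERT INTO logs (user_id, message)
VALUES (1, 'first'),
       (2, 'second'),
       (3, 'third');
-- if any row fails (e.g. a constraint violation), the whole statement errors out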
I faced the same question when I had to implement a CSV file (possibly quite long) data import (I know you can use the LOAD DATA INFILE syntax for that but I had to apply some processing on my fields before insertion).
So I made an experiment with transactions and a file with about 15k rows. The result is that if I insert all records inside one single transaction, it takes only a few seconds and it's CPU-bound. If I don't use any transaction at all, it takes several minutes and it's I/O-bound.
By committing every N rows, I got intermediate results.

Can the following query cause a deadlock for concurrent transactions?

Can executing this query cause a deadlock? If yes, then please explain how.
$q="UPDATE SET `count` =`count` + 1 WHERE user_id='$uid' FOR UPDATE";
It will not cause a deadlock. Even if many queries try to update at the same time, they will either wait for the other query to finish the update, or MySQL will run them simultaneously if they are updating different rows, given that you are using the InnoDB engine. In MyISAM there is only table-level locking, so the queries will end up running sequentially even if they are issued at the same time.
I do not see why there would be a deadlock with this query.
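For contrast, a deadlock needs at least two transactions that each hold a lock the other one wants. A minimal sketch, assuming a hypothetical counters table (the query in the question is missing its table name):
-- Session 1:
START TRANSACTION;
UPDATE counters SET `count` = `count` + 1 WHERE user_id = 1; -- locks row 1
-- Session 2:
START TRANSACTION;
UPDATE counters SET `count` = `count` + 1 WHERE user_id = 2; -- locks row 2
-- Session 1 (blocks, waiting for session 2's lock on row 2):
UPDATE counters SET `count` = `count` + 1 WHERE user_id = 2;
-- Session 2 (now each waits on the other; InnoDB rolls one back as the victim):
UPDATE counters SET `count` = `count` + 1 WHERE user_id = 1;
A single-statement increment like the one in the question takes only one row lock per transaction, so this circular wait cannot arise.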

How to clear transaction memory using either SQL or PHP PDO

I have an application that is live in a multiuser environment.
I have a problem when two or more people call SQL stored procs at the same time (SELECT and INSERT).
The memory on the server maxes out and a deadlock error is returned:
(0 => 4000, 1 => 1205, 2 => Transaction was deadlocked on lock resources with another process and has been chosen as the deadlock victim)
I'm positive that this is due to the memory not being cleared after each transaction.
My code looks like this:
BEGIN TRANSACTION
UPDATE table ...
SELECT column FROM table_2 ...
EXEC dbo.stored_proc
COMMIT
The stored procedures themselves do not have COMMIT.
I've read about mssql_free_statement() but I need either a SQL or PHP PDO alternative.
Also, is my diagnosis correct or could there be something else causing the deadlock?
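On the diagnosis: error 1205 is usually caused by two sessions acquiring locks in conflicting order rather than by memory that is not freed. A common SQL-side mitigation is to retry the victim transaction; a minimal T-SQL sketch (SQL Server 2012+ syntax, with the statements abbreviated as in the snippet above):
DECLARE @retries int = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION
        UPDATE table ...
        SELECT column FROM table_2 ...
        EXEC dbo.stored_proc
        COMMIT;
        BREAK; -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() <> 1205 OR @retries = 1
            THROW; -- not a deadlock, or out of retries: re-raise
        SET @retries = @retries - 1; -- deadlock victim: loop and try again
    END CATCH
END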

Multithreading in Oracle

I am working on Oracle 11g. I have a table which stores around 100 records. Two columns of importance to this question are:
ID
SQL
The SQL column contains a dynamic SQL query that needs to be executed. This dynamic SQL will update a single table.
How can I use DBMS_SCHEDULER to execute the dynamic SQL stored in the SQL column in parallel (multi-threading) for, say, 10 rows at a time? I do not want to execute all threads in parallel (since the number of records in this table can go up to 1000).
In case I am not clear enough with the problem statement, do let me know.
Please suggest!
You could either execute a series of jobs which each accept an ID and then process the associated SQL, which would be flexible, or you could use a scheduler chain in which you define a chain with ten steps, each of which executes one of the SQL statements, with rules to start all the steps at the start of the chain.
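A minimal sketch of the job-based approach, assuming a hypothetical table sql_queue(id, sql_text) standing in for the ID and SQL columns; it submits one-off background jobs for a batch of 10 rows:
BEGIN
  FOR r IN (SELECT id, sql_text
            FROM sql_queue
            WHERE ROWNUM <= 10) -- one batch of 10; mark or skip rows already run
  LOOP
    DBMS_SCHEDULER.CREATE_JOB(
      job_name   => 'RUN_SQL_' || r.id, -- one job per row
      job_type   => 'PLSQL_BLOCK',
      job_action => 'BEGIN EXECUTE IMMEDIATE '''
                    || REPLACE(r.sql_text, '''', '''''')
                    || '''; END;',
      enabled    => TRUE,  -- runs asynchronously as soon as it is created
      auto_drop  => TRUE); -- the job is dropped once it completes
  END LOOP;
END;
/
You would then poll USER_SCHEDULER_JOBS (or have each job mark its row as done) before submitting the next batch of 10.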

MySQL Database lockup

This is rather a general question. If I run an SQL query and loop through the results, would the database be locked whilst I am looping through these results, preventing further queries / inserts?
Also, if I was to send 5 or 6 insert statements to the database at the same time (via different calls), would there be a lockup?
I am having an issue where some of the logs I am meant to be inserting into the database are not there so I wanted to investigate this route.
I am using PHP 5 and the latest MySQL (can't remember the version).
Thanks.
There is a difference between "lock" and "corruption".
A database lock is something the database does to prevent data corruption. When two simultaneous DML (insert / update / delete) queries are encountered by the database, it will lock the related table(s) or row(s); locking can be "row-level" or "table-level".
With table-level locking, the table is locked and all subsequent queries are queued until the current query has been executed. With row-level locking, depending on the database, multiple updates to different rows are allowed simultaneously.
SELECT queries:
When you loop through your result set, no more calls are being made to the database (while in the loop); the result is already generated. Thus, it will not affect the database.
It can, depending on the engine. MyISAM does table locking whereas InnoDB does row locking.
http://dev.mysql.com/doc/refman/5.0/en/table-locking.html
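To see which engine (and therefore which locking behaviour) your log table uses, you can query the data dictionary; 'logs' is a hypothetical table name:
SELECT table_name, engine
FROM information_schema.tables
WHERE table_schema = DATABASE()
  AND table_name = 'logs';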
