Sequence number equivalent in Sybase ASE

I have an existing Sybase ASE table that uses IDENTITY as its primary key. Now I need to recreate this table, but I want the PK to start from the next value of the IDENTITY PK in the prod environment. E.g., if currently PK = 231, then after recreating I want it to start from 232 onwards, or from any other integer value > 231.
In Oracle it's easy to configure a sequence and give it a START WITH value, but Sybase ASE has no sequences, so I tried the newid() function; however, it gives binary(16) values, whereas I want integer values.
Can anyone suggest something?

I am planning to use something like the statement below, and I think it will resolve my problem. Let me know if anyone has a better solution.
select abs(hextoint(newid()))
Any thoughts on this solution? Can it ever generate a number it has already generated?
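For what it's worth, yes, it can repeat: hextoint() collapses the UUID down to a 32-bit integer, and abs() roughly halves that space again, so by the standard birthday approximation a repeat becomes likely after only a few tens of thousands of rows. A quick sanity check in Python (this is only the probability math, not the Sybase functions themselves):

```python
import math

def collision_probability(n_draws: int, space_size: int) -> float:
    """Birthday approximation: chance that n random draws from a
    space of the given size contain at least one repeat."""
    return 1.0 - math.exp(-n_draws * (n_draws - 1) / (2.0 * space_size))

# abs(hextoint(newid())) yields roughly a 31-bit space (2**31 values).
space = 2 ** 31

print(collision_probability(1_000, space))   # still tiny
print(collision_probability(65_000, space))  # already more likely than not
```

So for anything beyond a toy table, this is not a safe substitute for a sequence.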

select next_identity('tablename') will return the identity value of the next insert for a table with an identity column, so you know which ID will be allocated next.
Selecting @@identity immediately after an insert will return the ID that was just given to the inserted row.
However, you need to be careful: identity columns are not the same as sequences and should not be relied upon if you want a sequence with no gaps, because you will get a gap (albeit sometimes a small one) if the database crashes or is shut down with nowait. For gap-free sequences, a number fountain / insert-trigger style of ID generation is a better option. Using 'identity insert' is really only for when you want to bulk-load a whole table - you should not be setting it with every insert, or you will defeat the whole purpose of an identity column, which is fast generation of new key values.

Related

DMax+1 not working

I have an Access interface that connects to an Oracle database. I'm currently working on a way to automate updating an Oracle table with a button click. The table I am updating has a unique field which is VARCHAR but uses consecutive numbers.
In doing research, I found the following function to find the last number used and then increment it by 1:
Dmax("VAL([INFORMATION_ITEM_ID])", "[EAUSER_INFORMATION_ITEM]") + 1
This does successfully find the last number used and add one, but only for the first record. When I run the INSERT statement, the first record is added before an inconsistency error is thrown: the remaining records are being given the same ID that was used for the first record.
What else do I need to do in order to make this work?
Thanks!
Adding by request, here is the structure of the INFORMATION_ITEM_ID:
INFO_ITEM_ID
1000
1001
1002
1003
...
This is a Varchar field even though only numbers are being used (don't ask me why--not my idea and I have no say in the matter). I want to increment as records are added (1004, 1005, etc.).
I came up with a work-around, which is rather convoluted, but takes care of the issue. I think the problem lies in the need to commit the changes after each insert into the Oracle database, which Access doesn't do.
Because Access will not allow multiple SQL statements in one query, I ended up having to make several queries to make this work. Here are the steps I followed:
Create a temp table with the columns for the records to be inserted, an auto-incremented ID column, and a column for the string ID (in the example above, the INFORMATION_ITEM_ID)
Create an append query to insert into the temp table the records ultimately going into Oracle
Create an update query to calculate the string ID using the following formula:
CStr([TMP_TABLE]![ID]+DMax("Val([INFORMATION_ITEM_ID])","[INFORMATION_ITEM]"))
Use the resulting temp table to update the Oracle table
Because I'm automating this, I've also created a query to drop the temp table prior to recreating it so that the new import starts fresh.
Note: if you only need to insert the calculated ID into one table, you can use just a regular query rather than updating the temp table. In my case, the INFORMATION_ITEM_ID was being inserted into 2 tables, and if I hadn't inserted it into a temp table so that it stayed constant, then once I inserted it into the INFORMATION_ITEM table, the DMax value would change.
It takes 5 queries, but at least it works. It's sometimes frustrating the things you have to do to get around Access's quirks.
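The ID calculation in the update query is easy to sanity-check outside Access. A small Python sketch of the same arithmetic (names are placeholders for the tables above): the maximum existing ID is read once, and each temp row's auto-number is added to it, so every row gets a distinct consecutive value:

```python
def assign_string_ids(base_max: int, temp_auto_ids: list) -> list:
    """Mimic CStr([TMP_TABLE]![ID] + DMax(...)): the DMax value is
    snapshotted once, so every temp row gets a distinct, consecutive
    string ID instead of all reusing the same max + 1."""
    return [str(base_max + temp_id) for temp_id in temp_auto_ids]

# Existing maximum INFORMATION_ITEM_ID is 1003; temp rows autonumbered 1..3.
print(assign_string_ids(1003, [1, 2, 3]))  # ['1004', '1005', '1006']
```

This is exactly why the temp table fixes the original problem: the offset comes from the auto-number, not from re-evaluating DMax per row.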
Anyone have a more elegant way of doing this? Assume that you have to do this in Access (which is my situation).

Generate gap free numbers with database trigger

Together with my team, I am working on functionality to generate invoice numbers. The requirements say that:
there should be no gaps between invoice numbers
the numbers should start from 0 every year (together with the year, they will form a unique key)
the invoice numbers should grow according to the time of creation of the invoices
We are using PHP and Postgres. We thought to implement this in the following way:
each time a new invoice is persisted to the database, we use a BEFORE INSERT trigger
the trigger executes a function that retrieves a new value from a postgres sequence and writes it on the invoice as its number
Considering that multiple invoices could be created during the same transaction, my question is: is this a sufficiently safe approach? What are its flaws? How would you suggest to improve it?
Introduction
I believe the most crucial point here is:
there should be no gaps between invoice numbers
In this case you cannot use a sequence and an auto-increment field (as others propose in the comments). An auto-increment field uses a sequence under the hood, and the nextval(regclass) function increments the sequence's counter regardless of whether the transaction succeeded or failed (you point that out yourself).
Update:
What I mean is you shouldn't use sequences at all; in particular, the solution you propose doesn't eliminate the possibility of gaps. Your trigger gets a new sequence value, but the INSERT could still fail.
Sequences work this way because they are mainly meant to be used for generating PRIMARY KEY and OID values, where uniqueness and a non-blocking mechanism are the ultimate goals and gaps between values are really no big deal.
In your case, however, the priorities may be different, and there are a couple of things to consider.
Simple solution
The first possible solution to your problem could be returning the new number as the maximum of the currently existing ones plus one. It can be done in your trigger:
NEW.invoice_number = coalesce(
    (SELECT foo.invoice_number
     FROM invoices foo
     WHERE foo._year = NEW._year
     ORDER BY foo.invoice_number DESC NULLS LAST
     LIMIT 1),
    -1) + 1; /*query 1*/
(The coalesce handles the first invoice of a year, which starts at 0.)
This query can use your composite UNIQUE INDEX, provided it was created with the "proper" syntax and column order, with the year column in first place, e.g.:
CREATE UNIQUE INDEX invoice_number_unique
ON invoices (_year, invoice_number DESC NULLS LAST);
In PostgreSQL, UNIQUE CONSTRAINTs are implemented simply as UNIQUE INDEXes, so most of the time it makes no difference which command you use. However, the particular syntax presented above makes it possible to define an order on that index. It's a really nice trick which makes /*query 1*/ quicker than a simple SELECT max(invoice_number) FROM invoices WHERE _year = NEW._year as the invoice table gets bigger.
This is a simple solution, but it has one big drawback: there is a possibility of a race condition when two transactions try to insert an invoice at the same time. Both could acquire the same max value, and the UNIQUE CONSTRAINT would then prevent the second one from committing. Despite that, it could be sufficient in a small system with a special insert policy.
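The race described above is easy to see even without a database. In this Python sketch, two simulated "transactions" both read the current maximum before either commits, so they compute the same next number, which the unique index would then have to reject for one of them:

```python
# (year, invoice_number) pairs standing in for committed invoice rows.
invoices = {("2024", 1), ("2024", 2)}

def next_number(year: str) -> int:
    """The read half of the trigger: SELECT max(invoice_number) + 1."""
    numbers = [n for (y, n) in invoices if y == year]
    return (max(numbers) if numbers else -1) + 1

# Both transactions read before either one writes:
tx_a = next_number("2024")
tx_b = next_number("2024")
print(tx_a, tx_b)  # 3 3 -- a duplicate the unique index must reject
```

Any max-plus-one scheme has this window unless reads are serialized somehow.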
Better solution
You can create a table
CREATE TABLE invoice_numbers(
_year INTEGER NOT NULL PRIMARY KEY,
next_number_within_year INTEGER
);
to store the next possible number for each year. Then, in an AFTER INSERT trigger, you could:
Lock invoice_numbers so that no other transaction can even read the number: LOCK TABLE invoice_numbers IN ACCESS EXCLUSIVE MODE;
Get the new invoice number: new_invoice_number = (SELECT foo.next_number_within_year FROM invoice_numbers foo WHERE foo._year = NEW._year);
Update the number value of the newly added invoice row
Increment: UPDATE invoice_numbers SET next_number_within_year = next_number_within_year + 1 WHERE _year = NEW._year;
Because the table lock is held by the transaction until its commit, this should probably be the last trigger fired (read more about trigger execution order here)
Update:
Instead of locking the whole table with the LOCK command, check the link provided by Craig Ringer
The drawback in this case is a drop in INSERT performance: only one transaction at a time can perform an insert.
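The locking scheme can be mimicked in Python with a mutex standing in for the ACCESS EXCLUSIVE table lock. Because every allocator must take the lock before reading and incrementing the counter, the numbers come out gap-free and duplicate-free, at the cost of fully serializing the inserts:

```python
import threading

counters = {2024: 0}              # the invoice_numbers table: year -> next number
counters_lock = threading.Lock()  # stands in for LOCK TABLE ... ACCESS EXCLUSIVE
allocated = []                    # the "invoice rows" that got a number

def insert_invoice(year: int) -> None:
    with counters_lock:                   # 1. lock out all other readers
        number = counters[year]           # 2. read the next number
        allocated.append((year, number))  # 3. put it on the new invoice row
        counters[year] = number + 1       # 4. increment the counter

threads = [threading.Thread(target=insert_invoice, args=(2024,))
           for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

numbers = sorted(n for (_, n) in allocated)
print(numbers == list(range(100)))  # True: no gaps, no duplicates
```

Remove the lock and the same program can produce duplicates, which is exactly the race the simple max-plus-one approach suffers from.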

Reset Auto Increment ID using web browser [duplicate]

I have a table with 100 million rows, and it's getting too big.
I see a lot of gaps (since I delete, add, delete, add).
I want to fill these gaps using auto-increment.
If I do reset it, is there any harm?
If I do this, will it fill the gaps?:
mysql> ALTER TABLE tbl AUTO_INCREMENT = 1;
Potentially very dangerous, because you can get a number again that is already in use.
What you propose is resetting the sequence to 1 again. It will just produce 1, 2, 3, 4, 5, 6, 7, ... and so on, regardless of whether those numbers fall in a gap or not.
Update: According to Martin's answer, because of the dangers involved, MySQL will not even let you do that. It will reset the counter to at least the current value + 1.
Think again about what real problem the existence of gaps causes. Usually it is only an aesthetic issue.
If the number gets too big, switch to a larger data type (bigint should be plenty).
FWIW... According to the MySQL docs applying
ALTER TABLE tbl AUTO_INCREMENT = 1
where tbl contains existing data should have no effect:
To change the value of the AUTO_INCREMENT counter to be used for new rows, do this:
ALTER TABLE t2 AUTO_INCREMENT = value;
You cannot reset the counter to a value less than or equal to any that have already been used. For MyISAM, if the value is less than or equal to the maximum value currently in the AUTO_INCREMENT column, the value is reset to the current maximum plus one. For InnoDB, if the value is less than the current maximum value in the column, no error occurs and the current sequence value is not changed.
I ran a small test that confirmed this for a MyISAM table.
So the answers to your questions are: no harm, and no, it won't fill the gaps. As other responders have said, a change of data type looks like the least painful choice.
Chances are you wouldn't gain anything from doing this, and you could easily screw up your application by overwriting rows, since you're going to reset the count for the IDs. (In other words, the next time you insert a row, it'll overwrite the row with ID 1, and then 2, etc.) What will you gain from filling the gaps? If the number gets too big, just change it to a larger number (such as BIGINT).
Edit: I stand corrected. It won't do anything at all, which supports my point that you should just change the type of the column to a larger integer type. The maximum possible value for an unsigned BIGINT is 2^64 - 1, which is over 18 quintillion. If you only have 100 million rows at the moment, that should be plenty for the foreseeable future.
I agree with musicfreak... The maximum for an integer (int(10)) is 4,294,967,295 (unsigned, of course). If you need to go even higher, switching to BIGINT brings you up to 18,446,744,073,709,551,615.
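Those two ceilings are just the unsigned 32-bit and 64-bit integer bounds, easy to confirm:

```python
# Unsigned INT and BIGINT ceilings are 2^32 - 1 and 2^64 - 1 respectively.
print(2 ** 32 - 1)  # 4294967295
print(2 ** 64 - 1)  # 18446744073709551615
```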
Since you can't lower the next auto-increment value, you have other options. The data type switch could be done, but it seems a little unsettling to me, since you don't actually have that many rows. You'd also have to make sure your code can handle IDs that large, which may or may not be tough for you.
Are you able to take much downtime? If you are, there are two options I can think of:
Dump/reload the data. You can do this in a way that doesn't keep the ID numbers. For example, you could use a SELECT ... INTO to copy the data, sans IDs, to a new table with identical DDL. Then you drop the old table and rename the new table to the old name. Depending on how much data there is, this could take a noticeable amount of time (and temporary disk space).
You could write a little program to issue UPDATE statements that change the IDs. If you let it run slowly, it would "defragment" your IDs over time. Then you could temporarily stop the inserts (just a minute or two), update the last IDs, and restart. After updating the last IDs, you can change the AUTO_INCREMENT value to the next number and your hole will be gone. This shouldn't cause any real downtime (at least on InnoDB), but it could take quite a while depending on how aggressive your program is.
Of course, both of these ignore referential integrity. I'm assuming that's not a problem (log statements that aren't used as foreign keys, or some such).
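For a sense of what the second option's renumbering amounts to, here is a Python sketch that just generates the UPDATE statements (table and column names are placeholders). Processing the surviving IDs in ascending order means each row is only ever moved down into a gap, so no statement can collide with a not-yet-renumbered row:

```python
def defrag_updates(existing_ids, table="tbl", id_col="id"):
    """Generate the UPDATE statements a 'defragmenter' would issue,
    renumbering the surviving rows 1..n in their current order."""
    statements = []
    for new_id, old_id in enumerate(sorted(existing_ids), start=1):
        if new_id != old_id:  # row already in place: nothing to do
            statements.append(
                f"UPDATE {table} SET {id_col} = {new_id} "
                f"WHERE {id_col} = {old_id};"
            )
    return statements

for stmt in defrag_updates([1, 5, 9]):
    print(stmt)
# UPDATE tbl SET id = 2 WHERE id = 5;
# UPDATE tbl SET id = 3 WHERE id = 9;
```

A real version would batch these and sleep between batches, and, as noted above, it assumes nothing references these IDs as foreign keys.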
Does it really matter if there are gaps?
If you really want to go back and fill them, you can always turn off auto-increment and manually scan for the next available ID every time you want to insert a row -- remembering to lock the table to avoid race conditions, of course. But it's a lot of work for not much gain.
Do you really need a surrogate key anyway? Depending on the data (you haven't mentioned a schema) you can probably find a natural key.

Avoiding auto-increment ID collisions when moving data between MySQL servers

So the situation is that I am going to have two or more "insert" machines where my web application just inserts the data we want to log (they are all behind a load balancer). Every couple of hours, the machines will be disconnected from the load balancer one by one and will upload their information to the "master" database machine, which should then have a relatively up-to-date version of all the data we are collecting.
Originally I was going to use mysqldump, but I found that you cannot tell the command not to grab the auto_increment ID column I have (which would lead to collisions on the primary key). I saw another post recommending using a temporary table to put the data in and then dropping the column, but the "insert" machines have very low specs, and the amount of data could be pretty significant, on the order of 50,000 rows. Other than programmatically taking x rows at a time and inserting them into the remote "master" database, is there an easier way to do this? Currently I have PHP installed on the "insert" machines.
Thank you for your input.
Wouldn't you want the master database to have the same primary key for each record as the slave database? If not, that could lead to problems where a query produces different results depending on which machine it runs on.
If you want an arbitrary primary key that will avoid collisions, consider removing the auto-increment ID and constructing an ID that's guaranteed to be unique for every record on each server. For example, you could concatenate the unix time (with microseconds) with an identifier that's different for each server. A slightly lazier solution would be to concatenate time + a random 10-digit number or something. PHP's uniqid() function does something like this automatically.
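A sketch of that composite-ID idea in Python (the server label is an assumed deployment-specific string; PHP's uniqid() works along broadly similar lines): distinct server labels can never collide with each other, and the random suffix guards against two inserts in the same microsecond on one server:

```python
import random
import time

def make_unique_id(server_label: str) -> str:
    """Concatenate a microsecond timestamp, a per-server label,
    and a random 10-digit suffix into a collision-resistant ID."""
    microseconds = int(time.time() * 1_000_000)
    suffix = random.randrange(10 ** 10)  # the 'random 10-digit number'
    return f"{microseconds}-{server_label}-{suffix:010d}"

# One batch of IDs from a single (hypothetical) insert machine:
ids = {make_unique_id("insert01") for _ in range(100)}
print(len(ids))  # 100 distinct IDs
```

Note these IDs are strings, not integers, so they sort by insertion time but won't fit an existing INT primary key column.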
If you don't intend to ever use the ID, then just remove it from your tables. There's no rule saying that every table has to have a primary key. If you don't use it, but you want to encode information about when each record was inserted, add a timestamp column instead (and don't make it a key).

Does MySQL (MyISAM) fill table holes in a multirow insert?

I'm working on a project for which I need to frequently insert ~500 or so records at a remote location. I will be doing this in a single INSERT to minimize network traffic. I do, however, need to know the exact id field (AUTO_INCREMENT) values.
My web searches seem to indicate I could probably use the last_insert_id and calculate all the id values from there. However, this wouldn't work if the rows get ids that are all over the place.
Could anyone please clarify what would or should happen, and if the mathematical solution is safe?
A multi-row insert is an atomic operation in MySQL (both MyISAM and InnoDB). Since the table is locked for writing during this operation, no other rows can be inserted/updated during its execution.
This means the IDs will in fact be consecutive (unless the auto_increment_increment option is set to something other than 1).
Auto-increment does exactly that, it auto-increments - i.e., each new row gets the numerically next ID. MySQL does not re-use the IDs of rows that were deleted.
Your solution is safe because write operations acquire a table lock, so no other inserts can happen while your operation completes - so you will get n contiguous auto-increment values for n inserted rows.
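So recovering all the IDs is plain arithmetic: MySQL documents last_insert_id() as returning the ID generated for the first row of a multi-row INSERT, and the rest follow at auto_increment_increment steps. A sketch of the calculation:

```python
def multirow_ids(first_id: int, row_count: int, increment: int = 1) -> list:
    """IDs assigned by one multi-row INSERT: last_insert_id() gives the
    first row's ID; later rows step by auto_increment_increment."""
    return [first_id + i * increment for i in range(row_count)]

print(multirow_ids(first_id=101, row_count=5))               # [101, 102, 103, 104, 105]
print(multirow_ids(first_id=101, row_count=3, increment=2))  # [101, 103, 105]
```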