MyBatis + Spring Batch + Sybase: trying to get the database identity value after insert to assign it to the id field in a POJO (Sybase ASE)

My code looks like the sample given below.
--table create statement
--POJO class
public class Log {
    private long identifier;
    private String name;
    private String description;
    private String user;
    // getters and setters omitted
}
--insert statement in mapper
<insert id="insertRecord" parameterType="" useGeneratedKeys="true" keyProperty="identifier" keyColumn="uniqueID">
    VALUES (#{}, #{log.description}, #{log.user})
</insert>
Issue: when I run this code against the Sybase database, I get a NullPointerException. When I tried to debug it, the error came from within SybStatement.class. Sorry, I am not able to provide the entire stack trace due to copy/paste restrictions at my workstation.
I am able to run the same code against an H2 database successfully: records get inserted and "identifier" in the Log object holds the same identity value as the database rows.
Has anyone faced this issue with Sybase? Please share if you have code showing the usage of MyBatis's "useGeneratedKeys" feature against Sybase.
I am running this insert statement using MybatisBatchItemWriter.
I tried using two different SqlSessionTemplate objects for the chunk reader and chunk writer, and it didn't resolve the issue.
I am using the jconn3 Sybase JDBC jar, MyBatis 3.4.4 and mybatis-spring 1.3.1.
Thanks in advance

In SQL terms, you need to do SELECT @@IDENTITY to pick up the generated value. The question is whether your framework generates such SQL...
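If MyBatis is not retrieving the key through the driver's getGeneratedKeys() on Sybase, one possible workaround is MyBatis's `<selectKey>` element, which runs an explicit statement after the insert and copies the result into the key property. This is only a sketch: the table and column names (`log_table`, `name`, `description`, `user`) are placeholders inferred from the POJO, not taken from the original post.

```xml
<insert id="insertRecord" useGeneratedKeys="false">
    <!-- Runs after the insert and writes the result into Log.identifier -->
    <selectKey keyProperty="identifier" resultType="long" order="AFTER">
        SELECT @@IDENTITY
    </selectKey>
    INSERT INTO log_table (name, description, user)
    VALUES (#{log.name}, #{log.description}, #{log.user})
</insert>
```

Note that MybatisBatchItemWriter uses a batch executor, which flushes statements in groups, so a post-insert `<selectKey>` may not behave as expected there; verify against your setup.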


Trouble running liquibase with different agent

I need to execute the same db-changelog first from Ant and then from Spring. I expect Ant to run the changelog and, when Spring runs, for it to do nothing and just finish normally. Ant runs the db-changelog successfully, but when Spring runs it throws an exception; part of the stack trace:
Reason: liquibase.exception.JDBCException: Error executing SQL CREATE TABLE action (action_id int8 NOT NULL, action_name VARCHAR(255), version_no int8, reason_required BOOLEAN, comment_required BOOLEAN, step_id int8, CONSTRAINT action_pkey PRIMARY KEY (action_id)):
Caused By: Error executing SQL CREATE TABLE action (action_id int8 NOT NULL, action_name VARCHAR(255), version_no int8, reason_required BOOLEAN, comment_required BOOLEAN, step_id int8, CONSTRAINT action_pkey PRIMARY KEY (action_id)):
Caused By: ERROR: relation "action" already exists; nested exception is org.springframework.beans.factory.BeanCreationException....
Any help will be much appreciated.
It does sound like it is trying to run the changelog again. Each changeSet in the changelog is identified by a combination of the id, author, and the changelog path/filename. If you run "select * from databasechangelog" you can see the values used.
Your problem may be that you are referencing the changelog file differently from Ant and Spring, and therefore generating different filename values. Usually you will want to include the changelog on the classpath so that no matter where and how you run it, it has the same path (like "com/example/db.changelog.xml").
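To see what Liquibase actually recorded, you can query the tracking table directly. The column names below are Liquibase's standard ones; the mismatched paths in the comment are a hypothetical illustration:

```sql
-- Inspect how each changeSet was identified when it ran.
-- A mismatch such as 'db.changelog.xml' (recorded by ant) vs
-- 'com/example/db.changelog.xml' (resolved by spring) makes Liquibase
-- treat them as different changeSets and re-run them.
SELECT id, author, filename, dateexecuted
FROM databasechangelog
ORDER BY dateexecuted;
```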
I ran into this same problem and was able to fix it by altering the filename column of DATABASECHANGELOG to reference the spring resource path. In my case, I was using a ServletContextResource under the WEB-INF directory:
update DATABASECHANGELOG set FILENAME = 'WEB-INF/path/to/changelog.xml' where FILENAME = 'changelog.xml'

Error using Postgres UUID primary key with Hibernate

I am developing a greenfield web app that uses Spring Boot 1.4.1, which uses Spring 4.3.3 and Hibernate 5.2.3.Final under the hood.
We are using Postgres 9.4 for our database, so I have the postgres-9.4.1212.jar on my classpath as well.
All my tables use primary (& foreign) keys of type UUID.
In my entity class itself, I have the following annotations over the id property:
@Type(type = "pg-uuid")
@GeneratedValue(generator = "uuid")
@GenericGenerator(name = "uuid", strategy = "uuid2")
@Column(unique = true, nullable = false, columnDefinition = "uuid")
private UUID id;
When we connect to the database using this URL and run a query
spring.datasource.url: jdbc:postgresql://localhost:5432/mydb
we get the following error:
Caused by: org.postgresql.util.PSQLException:
ERROR: operator does not exist: uuid = bytea
Hint: No operator matches the given name and argument type(s).
You might need to add explicit type casts.
Position: 139
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(
The error goes away and the query works fine when we change the connection url to the following:
spring.datasource.url: jdbc:postgresql://localhost:5432/mydb?stringtype=unspecified
Note the added suffix: ?stringtype=unspecified
Is this the right thing to do?
Is there a better way to fix the error?
Saw a few Stack Overflow posts related to using Postgres UUIDs with Hibernate, so it looks like other folks are hitting the same issue as well. None of the answers seem satisfactory. It is odd that this would not work out of the box.
After perusing the Hibernate 5.2.3 documentation and R-ing TFM, I see this snippet which seems like it may be related:
Hibernate User Guide
Quoting from the above link
The default UUID mapping is as binary because it represents more
efficient storage. However many applications prefer the readability of
character storage. To switch the default mapping, simply call
MetadataBuilder.applyBasicType( UUIDCharType.INSTANCE,
UUID.class.getName() )
Also, elsewhere in section 2.3.13 of the same document, it says this:
Maps the UUID using PostgreSQL’s specific UUID data type. The
PostgreSQL JDBC driver chooses to map its UUID type to the OTHER code.
Note that this can cause difficulty as the driver chooses to map many
different data types to OTHER.
I suspect I need to create an @Bean to tell Spring Boot to configure the MetadataBuilder, but I am not quite sure how to do this.
Looking for some insights on how to configure hibernate's MetadataBuilder to process UUID as char instead of binary.
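One way to wire up the MetadataBuilder call from the quoted guide is Hibernate's MetadataBuilderInitializer service SPI, which Hibernate discovers via META-INF/services at bootstrap. This is only a sketch under the assumption that this SPI is available in your Hibernate version; the package and class name com.example.UuidCharMetadataBuilderInitializer are placeholders:

```java
package com.example;

import org.hibernate.boot.MetadataBuilder;
import org.hibernate.boot.registry.StandardServiceRegistry;
import org.hibernate.boot.spi.MetadataBuilderInitializer;
import org.hibernate.type.UUIDCharType;

// Registers UUIDCharType as the basic type for java.util.UUID, so UUIDs are
// bound as character data instead of binary, as the Hibernate guide suggests.
public class UuidCharMetadataBuilderInitializer implements MetadataBuilderInitializer {
    @Override
    public void contribute(MetadataBuilder metadataBuilder, StandardServiceRegistry serviceRegistry) {
        metadataBuilder.applyBasicType(UUIDCharType.INSTANCE, java.util.UUID.class.getName());
    }
}
```

plus a service file META-INF/services/org.hibernate.boot.spi.MetadataBuilderInitializer containing the single line com.example.UuidCharMetadataBuilderInitializer. Whether this interacts well with the explicit @Type(type = "pg-uuid") annotation and Postgres's native uuid columns is something to verify in your setup.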

BigqueryIO Unable to Write to Date-Partitioned Table

I am following the instructions in the following post to write to a date-partitioned table in BigQuery. I am using a SerializableFunction to map the window to a partition location using the $ syntax, and I get the following error:
Invalid table ID \"table$19700822\". Table IDs must be alphanumeric (plus underscores) and must be at most 1024 characters long.
Am I missing something here?
Edit adding code:
.to(new SerializableFunction<BoundedWindow, String>() {
    public String apply(BoundedWindow window) {
        String dayString = DateTimeFormat.forPattern("yyyyMMdd")
                .print(((IntervalWindow) window).start());
        return "project_id:dataset.table$" + dayString;
    }
})
Make sure that the table you're trying to access already exists. You can't create a table with "$" in its name, and you're using "create if needed", so your code might end up trying to create the table in addition to writing to it.
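To make the decorator mechanics concrete, here is a small plain-Java sketch (not part of the Beam or BigQuery API) that splits a decorated id into the base table, which must already exist, and the partition suffix after the $:

```java
public class PartitionDecorator {
    // Split a partition-decorated BigQuery table id such as "table$19700822"
    // into the base table name and the partition suffix after '$'.
    static String[] splitDecorator(String tableId) {
        int i = tableId.indexOf('$');
        if (i < 0) {
            return new String[] {tableId, ""};
        }
        return new String[] {tableId.substring(0, i), tableId.substring(i + 1)};
    }

    public static void main(String[] args) {
        String[] parts = splitDecorator("table$19700822");
        System.out.println(parts[0] + " / " + parts[1]); // table / 19700822
    }
}
```

Only the base name before the $ has to satisfy the "alphanumeric plus underscores" rule, which is why the full decorated string is rejected when BigQueryIO tries to create the table.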

JOOQ fails with PostgreSQL Custom Type as an Array: ERROR: malformed record literal

I have the following custom type on Postgres:
CREATE TYPE my_custom_type AS (
    field_a VARCHAR,
    field_b NUMERIC(10,3)
);
and the following table:
CREATE TABLE my_table (
    COL1 VARCHAR,
    CUSTOM_COLUMN my_custom_type,
    CUSTOM_COLUMN_ARRAY my_custom_type[]
);
Everything works fine when I use my custom type with JOOQ:
public void testWithoutArray() {
    MyTableRecord record = dsl.newRecord(MyTable.MY_TABLE);
    record.setCol1("My Col1");
    MyCustomTypeRecord customType = new MyCustomTypeRecord();
    customType.setFieldA("Field A Val");
    record.setCustomColumn(customType);
    record.store();
}
However, when I try to set a value in the field mapped to the custom type array, I get the following error:
public void testWithArray() {
    MyTableRecord record = dsl.newRecord(MyTable.MY_TABLE);
    record.setCol1("My Col1");
    MyCustomTypeRecord customType = new MyCustomTypeRecord();
    customType.setFieldA("Field A Val 1");
    MyCustomTypeRecord customType2 = new MyCustomTypeRecord();
    customType2.setFieldA("Field A Val 2");
    record.setCustomColumnArray(new MyCustomTypeRecord[]{customType, customType2});
    record.store();
}
org.jooq.exception.DataAccessException: SQL [insert into "my_table" ("col1", "custom_column_array") values (?, ?::my_custom_type[]) returning "my_table"."col1"]; ERROR: malformed record literal: "my_custom_type"(Field A Val 1, 1)"
Detail: Missing left parenthesis.
at org.jooq.impl.Utils.translate(
at org.jooq.impl.DefaultExecuteContext.sqlException(
at org.jooq.impl.AbstractQuery.execute(
at org.jooq.impl.TableRecordImpl.storeInsert0(
at org.jooq.impl.TableRecordImpl$1.operate(
at org.jooq.impl.RecordDelegate.operate(
at org.jooq.impl.TableRecordImpl.storeInsert(
at org.jooq.impl.UpdatableRecordImpl.store0(
at org.jooq.impl.UpdatableRecordImpl.access$000(
at org.jooq.impl.UpdatableRecordImpl$1.operate(
at org.jooq.impl.RecordDelegate.operate(
The query generated by jOOQ's debug log is the following:
DEBUG [main] - Executing query : insert into "my_table" ("col1", "custom_column_array") values (?, ?::my_custom_type[]) returning "my_table"."col1"
DEBUG [main] - -> with bind values : insert into "my_table" ("col1", "custom_column_array") values ('My Col1', array[[UDT], [UDT]]) returning "my_table"."col1"
Am I missing some configuration, or is it a bug?
As stated in the relevant jOOQ issue, this is a missing piece of support for this kind of PostgreSQL functionality. The answer given in the issue so far is:
Unfortunately, this is an area where we have to work around a couple of limitations of the PostgreSQL JDBC driver, which doesn't implement SQLData and other API (see also pgjdbc/pgjdbc#63).
Currently, jOOQ binds arrays and UDTs as strings. It seems that this particular combination is not yet supported. You will probably be able to work around this limitation by implementing your own custom data type Binding.
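For background on the error text: PostgreSQL expects each element of a composite-type array to be a plain parenthesized record literal, not a "type"(...) constructor call, which is why the server complains about a missing left parenthesis. Here is a minimal plain-Java sketch of that literal format (an illustrative helper, not jOOQ API):

```java
public class CompositeLiteral {
    // Render one composite value as the record literal PostgreSQL parses:
    // fields are comma-separated inside parentheses; text fields are
    // double-quoted, with embedded double quotes doubled.
    static String recordLiteral(String fieldA, String fieldB) {
        return "(\"" + fieldA.replace("\"", "\"\"") + "\"," + fieldB + ")";
    }

    public static void main(String[] args) {
        System.out.println(recordLiteral("Field A Val 1", "1"));
    }
}
```

A working Binding would need to serialize each array element in this shape rather than the "my_custom_type"(...) form seen in the exception.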

Circular Dependency Error with SQLAlchemy using autoload for table creation

I am attempting to use the script found here.
I am connecting to an MS SQL database and attempting to copy it into a MySQL database. When the script gets to this line:
I get this error:
I researched this error and found that it occurs when using autoload=True to create a table. The usual fix, which is to not use autoload=True and to set the use_alter=True flag when defining the foreign key, doesn't help me: I'm not defining the tables manually, so I can't set that flag.
Any help on how to correct this issue, or a better way to accomplish what I am trying to do, would be greatly appreciated. Thank you.
You can iterate through all constraints and set use_alter on them:
from sqlalchemy.schema import ForeignKeyConstraint

for table in metadata.tables.values():
    for constraint in table.constraints:
        if isinstance(constraint, ForeignKeyConstraint):
            constraint.use_alter = True
Or similarly, iterate through them and specify them as AddConstraint operations, bound to run after the whole metadata creates:

from sqlalchemy import event
from sqlalchemy.schema import AddConstraint, ForeignKeyConstraint

for table in metadata.tables.values():
    for constraint in table.constraints:
        if isinstance(constraint, ForeignKeyConstraint):
            event.listen(metadata, "after_create", AddConstraint(constraint))

see "Controlling DDL Sequences" in the SQLAlchemy documentation.