HikariCP Acceptable Test Query for Sybase 16 - sybase-ase

The question and answer below answer this question perfectly for almost any database except Sybase ASE (SAP ASE):
Efficient SQL test query or validation query that will work across all (or most) databases
What is a suitable setting for ASE? I'm using ASE 16 with this driver:
spring.datasource.driver-class-name=com.sybase.jdbc4.jdbc.SybDriver
pom:
<dependency>
    <groupId>com.sybase</groupId>
    <artifactId>jconn4</artifactId>
    <version>16</version>
</dependency>
From the error below it appears to be expecting a stored procedure; however, when I try an existing sproc (as opposed to "SELECT 1") it doesn't work either:
HikariPool-1 - Failed to execute connection test query (Stored procedure '"SELECT 1"' not found. Specify owner.objectname or use sp_help to check whether the object exists (sp_help may produce lots of output).
app properties:
spring.datasource.hikari.connection-test-query="SELECT 1"

In Sybase ASE, a select can be issued without a where or from clause:
A simple select statement contains only the select clause; the from clause is almost always included, but is necessary only in select statements that retrieve data from tables. All other clauses, including the where clause, are optional.
So you can just use SELECT 1, removing the double quotes, as in this example:
connection-test-query: SELECT 1
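Note that the quotes in the properties file were the actual problem: Hikari sent the literal "SELECT 1", quotes included, and ASE tried to resolve that token as a stored procedure name, hence the error above. In application.properties form the corrected setting is simply:
spring.datasource.hikari.connection-test-query=SELECT 1
Alternatively, since jConnect 4 is a JDBC4-era driver, it may also be possible to leave connection-test-query unset entirely (an assumption worth verifying against your driver); HikariCP then validates connections through the driver's Connection.isValid() method, which the HikariCP documentation recommends over a test query whenever the driver supports it.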

Related

Join two tables from different PostgreSQL databases in CodeIgniter 3 [duplicate]

I'm going to guess that the answer is "no" based on the below error message (and this Google result), but is there any way to perform a cross-database query using PostgreSQL?
databaseA=# select * from databaseB.public.someTableName;
ERROR: cross-database references are not implemented:
"databaseB.public.someTableName"
I'm working with some data that is partitioned across two databases although data is really shared between the two (userid columns in one database come from the users table in the other database). I have no idea why these are two separate databases instead of schema, but c'est la vie...
Note: As the original asker implied, if you are setting up two databases on the same machine you probably want to make two schemas instead - in that case you don't need anything special to query across them.
Update as of 9.3
You can now use the new postgres_fdw (foreign data wrapper) to connect to tables in any Postgres database - local or remote.
Note that there are foreign data wrappers for other popular data sources. At this time, only postgres_fdw and file_fdw are part of the official Postgres distribution.
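As a minimal sketch of the postgres_fdw setup, reusing the databaseA/databaseB names from the question (the server name other_db, the host, and the credentials are placeholders; IMPORT FOREIGN SCHEMA requires 9.5+, on 9.3/9.4 declare the table with CREATE FOREIGN TABLE instead):
-- Run inside databaseA
CREATE EXTENSION postgres_fdw;
CREATE SERVER other_db FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'localhost', dbname 'databaseB');
CREATE USER MAPPING FOR CURRENT_USER SERVER other_db
    OPTIONS (user 'myuser', password 'mypass');
IMPORT FOREIGN SCHEMA public LIMIT TO (sometablename)
    FROM SERVER other_db INTO public;
-- The foreign table can now be queried as if it were local:
SELECT * FROM sometablename;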
Original answer for pre-9.3
This functionality isn't part of the default PostgreSQL install, but you can add it in. It's called dblink.
I've never used it, but it is maintained and distributed with the rest of PostgreSQL. If you're using the version of PostgreSQL that came with your Linux distro, you might need to install a package called postgresql-contrib.
I have run into this before and came to the same conclusion about cross-database queries as you. What I ended up doing was using schemas to divide the table space; that way I could keep the tables grouped but still query them all.
dblink() -- executes a query in a remote database
dblink executes a query (usually a SELECT, but it can be any SQL statement that returns rows) in a remote database.
When two text arguments are given, the first one is first looked up as a persistent connection's name; if found, the command is executed on that connection. If not found, the first argument is treated as a connection info string as for dblink_connect, and the indicated connection is made just for the duration of this command.
A good example:
SELECT *
FROM table1 tb1
LEFT JOIN (
    SELECT *
    FROM dblink('dbname=db2', 'SELECT id, code FROM table2')
        AS t(id int, code text)
) AS tb2 ON tb2.id = tb1.id;
Note: I am giving this information for future reference. Reference
Just to add a bit more information.
There is no way to query a database other than the current one. Because PostgreSQL loads database-specific system catalogs, it is uncertain how a cross-database query should even behave.
contrib/dblink allows cross-database queries using function calls. Of course, a client can also make simultaneous connections to different databases and merge the results on the client side.
PostgreSQL FAQ
Yes, you can, by using dblink (PostgreSQL only), DBI-Link (allows foreign cross-database queries), and TDS_Link, which allows queries to be run against MS SQL Server.
I have used DB-Link and TDS-link before with great success.
If performance is important and most queries are read-only, I would suggest replicating data over to another database. While this seems like unneeded duplication of data, it might help if indexes are required.
This can be done with simple on insert triggers which in turn call dblink to update another copy. There are also full-blown replication options (like Slony) but that's off-topic.
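As a sketch of that trigger approach (the users table, the target database, and the credentials are invented for illustration):
-- Push each newly inserted row into a copy in another database via dblink.
CREATE OR REPLACE FUNCTION copy_user_insert() RETURNS trigger AS $$
BEGIN
    PERFORM dblink_exec(
        'dbname=reportdb user=myuser password=mypass',
        format('INSERT INTO users (id, name) VALUES (%s, %L)',
               NEW.id, NEW.name));
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER users_copy
    AFTER INSERT ON users
    FOR EACH ROW EXECUTE PROCEDURE copy_user_insert();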
In case someone needs a more involved example on how to do cross-database queries, here's an example that cleans up the databasechangeloglock table on every database that has it:
CREATE EXTENSION IF NOT EXISTS dblink;
DO
$$
DECLARE
    database_name TEXT;
    conn_template TEXT;
    conn_string   TEXT;
    table_exists  BOOLEAN;
BEGIN
    conn_template = 'user=myuser password=mypass dbname=';
    -- Loop over every non-template database in the cluster.
    FOR database_name IN
        SELECT datname FROM pg_database
        WHERE datistemplate = false
    LOOP
        conn_string = conn_template || database_name;
        -- Ask the remote database whether the table exists there.
        table_exists = (SELECT table_exists_ FROM dblink(conn_string,
            'SELECT count(*) > 0 FROM information_schema.tables WHERE table_name = ''databasechangeloglock''')
            AS (table_exists_ BOOLEAN));
        IF table_exists THEN
            PERFORM dblink_exec(conn_string, 'DELETE FROM databasechangeloglock');
        END IF;
    END LOOP;
END
$$;

Can I use a query parameter in a table name?

I want to do something along the lines of:
SELECT some_things
FROM `myproject.mydataset.mytable_@suffix`
But this doesn't work because the parameter isn't expanded inside the table name.
This does work, using wildcard tables:
SELECT some_things
FROM `myproject.mydataset.mytable_*`
WHERE _TABLE_SUFFIX = @suffix
However, it has some problems:
If I mistype the parameter, this query silently returns zero rows, rather than yelling at me loudly.
Query caching stops working when querying with a wildcard.
If other tables exist with the mytable_ prefix, they must have the same schema, even if they don't match the suffix. Otherwise, weird stuff happens. It seems like BigQuery either computes the union of all columns, or takes the schema of an arbitrary table; it's not documented and I didn't look at it in detail.
Is there a better way to query a single table whose name depends on a query parameter?
In order to answer your stated problems:
Table scanning happens in the FROM clause; filtering happens in the WHERE clause [1]. Thus, if the WHERE condition doesn't match anything, an empty result is returned.
"Currently, cached results are not supported when querying with a wildcard" [2].
"BigQuery uses the schema for the most recently created table that matches the wildcard as the schema" [3]. What kind of weird stuff have you faced in your use case? "A wildcard table represents a union of all the tables that match the wildcard expression" [4].
Parameterized queries can be run in BigQuery, but table names cannot be parameterized [5]. Your wildcard solution seems to be the only way.
You can actually pass table names in if you use the Python API, although it's not documented yet: build the query with a formatted string rather than a query parameter, and your query should work.
SQL example:
sql = "SELECT max(_last_updt) FROM `{0}.{1}.{2}` WHERE _last_updt >= TIMESTAMP(" +
"CURRENT_DATE('-06:00'))".format(project_id, dataset_name, table_name)
SQL in context of Python API:
from google.cloud import bigquery

bigquery_client = bigquery.Client()     # set up the client
query_job = bigquery_client.query(sql)  # run the query
results = query_job.result()            # waits for job to complete
for row in results:
    print(row)

Using Select *, in Oracle (PL/SQL)

I am trying to do this in an SSIS task that is connected to an Oracle table:
Select *, SYSDATE from OracleTable1
And Oracle doesn't like it, saying 'from keyword not found where expected'. Interestingly, this would run fine if connected to a SQL Server source. Also interestingly, if I entered the columns to replace the * it also runs. So what is it about Oracle that doesn't allow '*, sysdate'?
Am I just doing something wrong? I want ALL columns regardless, then a SYSDATE. Why is that not possible? I just want to avoid listing columns (that could be renamed upstream) and breaking the job. I'd rather have nulls than an errored job. To put it in perspective, I would rather the reports that the data feeds have one or two null fields as opposed to nothing in the reports at all.
It should work when the * is qualified with the table name or an alias; Oracle allows an unqualified * only when it is the only item in the select list:
Select OracleTable1.*, SYSDATE from OracleTable1
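The same works with a shorter table alias (the alias t here is just illustrative):
SELECT t.*, SYSDATE FROM OracleTable1 t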

Union query Access on an Interbase DB

I am executing queries from Access 2010 on an Interbase database via ODBC (Easysoft) ver. 7. Everything works fine except when I come to fire a UNION query such as this:
SELECT TRIP.TRIPDATE, RESERVATION.BOOKINGREF, RESERVATION.LEADNAME, TRIP.DRIVERID, RESERVATION.STATUS, RESERVATION.DATECANCELLED, TRIP.TRANSPORTTYPEID
FROM TRIP INNER JOIN RESERVATION ON TRIP.TRIPID = RESERVATION.ARRIVALTRIPID
WHERE (((TRIP.TRIPDATE) Between #2/1/2012# And #2/29/2012#) AND ((TRIP.DRIVERID)=2) AND ((RESERVATION.DATECANCELLED) Is Null) AND ((TRIP.TRANSPORTTYPEID)=12))
UNION
SELECT TRIP.TRIPDATE, RESERVATION.BOOKINGREF, RESERVATION.LEADNAME, TRIP.DRIVERID, RESERVATION.STATUS, RESERVATION.DATECANCELLED, TRIP.TRANSPORTTYPEID
FROM TRIP INNER JOIN RESERVATION ON TRIP.TRIPID = RESERVATION.DEPARTURETRIPID
WHERE (((TRIP.TRIPDATE) Between #2/1/2012# And #2/29/2012#) AND ((TRIP.DRIVERID)=2) AND ((RESERVATION.DATECANCELLED) Is Null) AND ((TRIP.TRANSPORTTYPEID)=12));
When I run this query from Access I get
"ODBC --call failed, [Easysoft][Interbase]Dynamic SQL Error, SQL error
code = -104, Token unknown -line1,char 0, ((#-104)"
When running the select queries on their own they work fine but when joined via UNION I get this error.
Any help would be appreciated.
thanks
You don't mention if your query is a passthrough query or if you are using linked ODBC tables in an Access query.
If you are using a normal Access query
When using linked ODBC tables in a normal Access query, the Access data engine will rewrite the queries as necessary to make them compatible with the other database engine.
Sometimes, it can fail though.
Make sure each SELECT query works and returns correct data independently.
Try a simpler UNION query to make sure that the issue comes from the UNION keyword itself.
Try UNION ALL
Try using a pass-through query instead.
If you are using a pass-through query
Pass-through queries are sent verbatim to the ODBC driver, and Access just collects the results without rewriting the query itself.
Make sure each SELECT query works as a pass-through query and returns correct data independently.
Make sure that the literal dates are properly formatted for Interbase SQL (see the sketch after this list).
The ones you use are correct for Access SQL, but different databases accept different formats.
Try a simpler UNION query using simple SELECT statements involving 1 or 3 fields only.
Try UNION ALL.
You don't show it in your question, but just in case: if you used an ORDER BY clause, note that it applies to the whole UNION and must come once, after the last SELECT (wrap the UNION in a subquery if you need anything fancier).
Try casting the data types of your fields. It may be that some fields' data are incorrectly interpreted and that the union fails because the data retrieved appears to be of different types.
Try using a standard Access query instead.
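As a sketch of the date-literal point above, a pass-through version of the first branch would quote the dates instead of using Access's # delimiters (the exact format Interbase accepts may depend on the version; 'MM/DD/YYYY' here is an assumption to verify):
SELECT TRIP.TRIPDATE, RESERVATION.BOOKINGREF, RESERVATION.LEADNAME
FROM TRIP INNER JOIN RESERVATION ON TRIP.TRIPID = RESERVATION.ARRIVALTRIPID
WHERE TRIP.TRIPDATE BETWEEN '02/01/2012' AND '02/29/2012'
  AND TRIP.DRIVERID = 2
  AND RESERVATION.DATECANCELLED IS NULL
  AND TRIP.TRANSPORTTYPEID = 12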

MySQL to DB2 through ADOdb PHP

I'm trying to port a small PHP application from MySQL to DB2. The application connects to a MySQL database through ADOdb. I was successful in connecting to the DB2 database through ADOdb, but I wasn't so successful in executing the SQL queries. The queries needed to be modified to include quotation marks (" ") around table names to execute. Is there any workaround in ADOdb for this? It's a bit tedious to modify each query (which actually defeats the purpose of using ADOdb in the first place).
Thanks!
In DB2, by default, schema, table and column names are not case sensitive. When you issue the statement:
create table myid.test (
    c1 int,
    c2 int
);
DB2 folds the schema, table and column names into upper case. Therefore, if you look in the system catalog, you'll see that the table is called MYID.TEST and has columns C1 and C2.
DB2 folds all queries into upper case as well (by default). So, when you query this table, the following statements are identical:
select c1, c2 from myid.test
SELECT C1, c2 from MYID.TEST
SELECT c1, C2 from MyID.Test
However, DB2 can use case sensitive names: If you quote the schema/table/column names in the definition, then DB2 will use the exact strings:
create table "MyID"."Test" (
c1 int,
"C2" int
);
In this case, you'll see the mixed case schema/table/column names in the system catalog.
This has the unfortunate (and painful) side effect of REQUIRING that you quote your schema/table/column names in all of your queries, DML and DDL.
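For illustration (reusing the names from the example above), the first query below fails because DB2 folds the unquoted names to MYID.TEST before looking them up, while the second matches the catalog entries exactly:
select c1, c2 from MyID.Test        -- fails: resolved as MYID.TEST
select c1, c2 from "MyID"."Test"    -- works: matches "MyID"."Test" in the catalog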
Using mixed case names is NOT best practice.
The best solution would be to re-create your tables without the case-sensitive names (i.e. don't put schema/table/column names in quotes).
This will eliminate your need to override everything with ADODB. It's possible that there is some workaround for ADODB, but the pain will still exist for anyone else.
It's a bit tedious to modify each query (which actually defeats the purpose of using ADOdb in the first place).
While it certainly may be tedious, you'll have to do it. JDBC provides the equivalent functionality in Java, and there you have to write SQL that is specific to your particular database. This is just how it works with database development. Unless you use some sort of abstraction layer like Hibernate (a Java ORM) to hide the specifics of the SQL from you, you'll have to tweak it to run on a different database.
Be glad that the only thing you've encountered so far is adding a few quotation marks. People frequently end up having to rewrite most of the query when converting a complex query from one server to another.
