Extrainfo column in Sybase audit table - sybase-ase

I'm using Sybase ASE 15.0, and when reading the extrainfo column from sysaudits_01 some values are missing. The manual says that in the case of an update, the previous value and the current value will appear in the column. I have tested different scenarios of updates, inserts, and deletes, but other than the words UPDATE/INSERT/DELETE nothing else appears.
Is there anything that should be turned on, or something else I need to do, to be able to see the values?
Any help is appreciated.
Later edit: Using sp_audit 'cmdtext', 'sa', 'all', 'on' --user level
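For what it's worth, here is a hedged sketch of enabling table-level DML auditing alongside the user-level cmdtext option, and then reading the audit trail; the database and table names below (mydb.dbo.mytable) are hypothetical, and the sp_audit argument order should be checked against the ASE 15.0 reference:
-- Hedged sketch: table-level auditing for a hypothetical table, then a read of the trail.
exec sp_audit 'insert', 'all', 'mydb.dbo.mytable', 'on'
exec sp_audit 'update', 'all', 'mydb.dbo.mytable', 'on'
exec sp_audit 'delete', 'all', 'mydb.dbo.mytable', 'on'

select event, loginname, objname, extrainfo
from sybsecurity.dbo.sysaudits_01
order by eventtime desc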


Related

Data truncated for column 'xxxx' at row 1 [closed]

I have just moved a PHP site to a new server. One of my queries is failing with a "Data truncated for column 'xxx' at row 1" message. I checked that this field is of type float(10,6). The values I provide while updating are not always in exact float(10,6) format and they vary; sometimes I put just 0, or just 54.56666. Any idea how I can sort this out?
PS:
On the earlier server everything was working fine. This new server has a different (newer) version of MySQL. I don't want to make any changes to the MySQL config.
You tried to put in data that was longer than what your column definitions allow. Please provide the queries you used so we can see them. In addition, googling the error message yielded:
http://forums.mysql.com/read.php?11,132672,132672#msg-132672
http://forums.mysql.com/read.php?11,132672,132693#msg-132693
http://dev.mysql.com/doc/refman/5.0/en/server-sql-mode.html
The suggested solution there is to modify the server's "strict mode" setting:
When this manual refers to “strict mode,” it means a mode where at least one of STRICT_TRANS_TABLES or STRICT_ALL_TABLES is enabled.
Also, if you have an ENUM column and you insert a value that is not present in the enum, you can see this error.
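As a hedged sketch of what the linked docs describe, you can inspect the active SQL modes and relax strict mode for the current session only, which leaves the server config untouched:
-- Show the modes currently in effect on the new server.
SELECT @@GLOBAL.sql_mode, @@SESSION.sql_mode;
-- Relax strict mode for this session only; alternatively remove just
-- STRICT_TRANS_TABLES / STRICT_ALL_TABLES from the list.
SET SESSION sql_mode = '';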

DataGrip v2018.2: 'Unresolved Reference' Inspection error occurring for aliased column

I am working with Redshift, and have a query where I'm running multiple aggregations/text field parsing operations. I'm aliasing these operations as their own column name, and then reusing that alias later in the query to perform additional operations on the data.
In lieu of copying the aggregation/parse code again into later sections of the query, I'm using the aliased column name in subsequent manipulations, which Redshift appears to accept as valid usage since the query completes (PostgreSQL does not accept this as valid usage).
As an example:
-- How I've currently written my query.
SUM(widget_cost) AS total_widget_cost
, CASE WHEN total_widget_cost > 100.00 THEN '1' ELSE '0' END as widget_check
vs.
-- How I don't want to have to write my query.
SUM(widget_cost) AS total_widget_cost
, CASE WHEN SUM(widget_cost) > 100.00 THEN '1' ELSE '0' END as widget_check
When DataGrip inspects the first version, it produces an 'Unresolved Reference' error on the alias 'total_widget_cost' when it is reused within the CASE statement. That being said, the query compiles fine and Redshift understands how to interpret the query correctly because it returns accurate results.
Below is a screenshot of what I'm seeing in my actual code:
I understand that you can suppress the warning (alt + enter -> 'Inspection Unresolved Reference' Options -> 'Suppress for Statement') but that blocks valid errors from displaying in the event that I've typed a table name incorrectly, etc.
Any help on what I can do to get DataGrip to 'recognize' these aliases would be greatly appreciated!
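For what it's worth, a hedged workaround sketch (the widgets table and the widget_id grouping column are hypothetical, not from the question) is to compute the aggregate in a derived table, so the alias becomes a real output column that both standard dialects and DataGrip's inspector can resolve:
SELECT widget_id,
       total_widget_cost,
       CASE WHEN total_widget_cost > 100.00 THEN '1' ELSE '0' END AS widget_check
FROM (
    SELECT widget_id,
           SUM(widget_cost) AS total_widget_cost
    FROM widgets
    GROUP BY widget_id
) AS agg;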

WSO2 DAS - High CPU usage with oracleDB 11g

I am configuring DAS 3.1.0 + APIM 2.0.0 with Oracle database 11g (relational database).
When I enable DAS analytics statistics to integrate with API Manager, almost everything works fine except that DAS dramatically raises the CPU consumption of the machine hosting the database.
I noticed that it always runs this query:
MERGE INTO API_REQ_USER_BROW_SUMMARY dest
USING (SELECT :1 api, :2 version, :3 apiPublisher, :4 tenantDomain,
              :5 total_request_count, :6 year, :7 month, :8 day,
              :9 requestTime, :10 os, :11 browser
       FROM dual) src
ON (dest.api = src.api AND dest.version = src.version
    AND dest.apiPublisher = src.apiPublisher AND dest.year = src.year
    AND dest.month = src.month AND dest.day = src.day
    AND dest.os = src.os AND dest.browser = src.browser
    AND dest.tenantDomain = src.tenantDomain)
WHEN NOT MATCHED THEN
    INSERT (api, version, apiPublisher, tenantDomain, total_request_count,
            year, month, day, requestTime, os, browser)
    VALUES (src.api, src.version, src.apiPublisher, src.tenantDomain,
            src.total_request_count, src.year, src.month, src.day,
            src.requestTime, src.os, src.browser)
WHEN MATCHED THEN
    UPDATE SET dest.total_request_count = src.total_request_count,
               dest.requestTime = src.requestTime
I would like to know if there is a way to optimize this so that the CPU of the machine hosting the database is not hit so hard, causing a performance drop.
Has anyone run into this problem before, and could you help me?
What happens in the above query is: records are inserted into the database if there are no records with the same primary key values, and if records with the same primary keys already exist, the existing records are updated.
The table "API_REQ_USER_BROW_SUMMARY" has two columns, "OS" and "browser", which are part of the primary key of that table. It has been observed that when NULL values are inserted into "OS" and "browser", the analytics server and the database hang.
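As a hedged check (adjust the owner/schema prefix to your setup), you can confirm whether such rows are already present before applying the fix described next:
-- Count summary rows that carry NULL in the key columns.
SELECT COUNT(*)
FROM API_REQ_USER_BROW_SUMMARY
WHERE os IS NULL OR browser IS NULL;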
What you can do is the following (you might need to shut down the analytics server and restart the DB server before following these steps):
Go to {Analytics_server}/repository/deployment/server/carbonapps then open org_wso2_carbon_analytics_apim-1.0.0.car as a zip file.
Go to the folder APIM_USER_AGENT_STATS_1.0.0.
Open APIM_USER_AGENT_STATS.xml.
At the end of the script (just before the closing tag), you will see a Spark SQL query like the one below.
INSERT INTO TABLE APIUserBrowserData SELECT api,version,apiPublisher,tenantDomain,total_request_count,year,month,day,requestTime,os,browser FROM API_REQUEST_USER_BROWSER_SUMMARY_FINAL;
Replace that line with the following.
INSERT INTO TABLE APIUserBrowserData SELECT api,version,apiPublisher,tenantDomain,total_request_count,year,month,day,requestTime, if(os is null, "UNKNOWN",os), if(browser is null, "UNKNOWN", browser) FROM API_REQUEST_USER_BROWSER_SUMMARY_FINAL;
This will prevent Spark from inserting NULL values into the "OS" and "browser" columns of table "API_REQ_USER_BROW_SUMMARY".
Please check whether the CPU consumption is still high after making the above changes.
Edit: #artCampos, I cannot comment, so I am editing my original answer to reply to your comment. There will not be any side effect, but note that we are replacing the NULL values with the string value "UNKNOWN". I don't think that will be a problem in this case. You don't need to discard any of the existing data. Please also note that, in any case, if NULL values are inserted into DB primary keys, it will fail in most RDBMSs.

Cannot insert new value to BigQuery table after updating with new column using streaming API

I'm seeing some strange behaviour with my BigQuery table. I've just added a new column to a table, and it looks good in the interface and when getting the schema via the API.
But when adding a value to the new column I get the following error:
{
"insertErrors" : [ {
"errors" : [ {
"message" : "no such field",
"reason" : "invalid"
} ],
"index" : 0
} ],
"kind" : "bigquery#tableDataInsertAllResponse"
}
I'm using the Java client and the streaming API; the only thing I added is:
tableRow.set("server_timestamp", 0)
Without that line it works correctly :(
Do you see anything wrong with it? (The name of the column is server_timestamp, and it is defined as an INTEGER.)
Updating this answer since BigQuery's streaming system has seen significant updates since Aug 2014 when this question was originally answered.
BigQuery's streaming system caches the table schema for up to 2 minutes. When you add a field to the schema and then immediately stream new rows to the table, you may encounter this error.
The best way to avoid this error is to delay streaming rows with the new field for 2 minutes after modifying your table.
If that's not possible, you have a few other options:
Use the ignoreUnknownValues option. This flag will tell the insert operation to ignore unknown fields, and accept only those fields that it recognizes. Setting this flag allows you to start streaming records with the new field immediately while avoiding the "no such field" error during the 2 minute window--but note that the new field values will be silently dropped until the cached table schema updates!
Use the skipInvalidRows option. This flag will tell the insert operation to insert as many rows as it can, instead of failing the entire operation when a single invalid row is detected. This option is useful if only some of your data contains the new field, since you can continue inserting rows with the old format, and decide separately how to handle the failed rows (either with ignoreUnknownValues or by waiting for the 2 minute window to pass).
If you must capture all values and cannot wait for 2 minutes, you can create a new table with the updated schema and stream to that table. The downside is that you then need to manage the multiple tables this generates. Note that you can query these tables conveniently using TABLE_QUERY, and you can run periodic cleanup queries (or table copies) to consolidate your data into a single table.
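As a hedged sketch of the TABLE_QUERY consolidation mentioned above (legacy SQL; the dataset name mydataset and the events_ prefix are hypothetical):
-- Query every generated table whose id contains "events_" as if it were one table.
SELECT *
FROM TABLE_QUERY(mydataset, 'table_id CONTAINS "events_"')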
Historical note: A previous version of this answer suggested that users stop streaming, move the existing data to another table, re-create the streaming table, and restart streaming. However, due to the complexity of this approach and the shortened window for the schema cache, this approach is no longer recommended by the BigQuery team.
I was running into this error. It turned out that I was building the insert object as if I were in "raw" mode, but had forgotten to set the flag raw: true. This caused BigQuery to take my insert data and nest it again under a json: {} node.
In other words, I was doing this:
table.insert({
  insertId: 123,
  json: {
    col1: '1',
    col2: '2',
  }
});
when I should have been doing this:
table.insert({
  insertId: 123,
  json: {
    col1: '1',
    col2: '2',
  }
}, {raw: true});
The Node.js BigQuery library didn't realize that it was already in raw mode and was then trying to insert this:
{
  insertId: '<generated value>',
  json: {
    insertId: 123,
    json: {
      col1: '1',
      col2: '2',
    }
  }
}
So in my case the errors were referring to the fact that the insert was expecting my schema to have 2 columns in it (insertId and json).

Access SQL, append query breaks when using ODBC/PHP

I'm designing a web interface for my client's database (a .mdb MS Access file). I'm using an ODBC driver to connect to it and the odbc_ functions provided by PHP.
My problem is Access's 'append' queries. From what I gather, an append query just inserts more rows, but something is preventing the query from executing:
INSERT INTO test ( TITLE, [LEVEL], UNITID, TITLEM, COHORTPLUSOPTIONS )
SELECT \"OPTION ONLY\" AS Expr, Units.LEVEL, UnitOptionNumbers.ID, Units.TITLE,
UnitOptionNumbers.OPTIONCOHORT
FROM UnitOptionNumbers INNER JOIN Units ON UnitOptionNumbers.ID = Units.ID WHERE
(((UnitOptionNumbers.NOAWARD)=Yes));
The most helpful error message I can get is:
[ODBC Microsoft Access Driver] Too few parameters. Expected 1.
Which isn't helpful at all. I'm confident with MySQL, but I just cannot pinpoint the problem here. Please can you help me find the reason the query won't execute, or help me figure out a workaround.
Thanks for your time.
I don't have enough reputation to comment, but perhaps it could be a problem with the fact that your table "test" has two fields with the same name ("TITLE").
According to Microsoft:
"This error occurs only with Microsoft Access when one of the column names specified in a select statement does not exist in the table being queried."
The solution therefore is to change
SELECT \"OPTION ONLY\" AS Expr
to
SELECT 'OPTION ONLY'
It seems the original code attempted to fill the first field with a default text value, i.e. "OPTION ONLY", but because of the escaped double quotes, "OPTION ONLY" was being read as a column name.
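Putting that together, a hedged version of the full append query with the literal in single quotes (the AS Expr alias is optional and kept only to mirror the original):
INSERT INTO test ( TITLE, [LEVEL], UNITID, TITLEM, COHORTPLUSOPTIONS )
SELECT 'OPTION ONLY' AS Expr, Units.LEVEL, UnitOptionNumbers.ID, Units.TITLE,
       UnitOptionNumbers.OPTIONCOHORT
FROM UnitOptionNumbers INNER JOIN Units ON UnitOptionNumbers.ID = Units.ID
WHERE (((UnitOptionNumbers.NOAWARD)=Yes));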
