Web API Returning JSON - [System.NotSupportedException] Specified method is not supported. (Sybase ASE) - sybase-ase

I'm using Web API with Entity Framework 4.2 and the Sybase ASE connector.
This was working without issues returning JSON, until I tried to add a new table.
return db.car
    .Include("tires")
    .Include("tires.hub_caps")
    .Include("tires.hub_caps.colors")
    .Include("tires.hub_caps.sizes")
    .Include("tires.hub_caps.sizes.units")
    .Where(c => c.tires == 13);
The above works without issues if the following line is removed:
.Include("tires.hub_caps.colors")
However, when that line is included, I am given the error:
"An error occurred while preparing the command definition. See the inner exception for details."
The inner exception reads:
"InnerException = {"Specified method is not supported."}"
"source = Sybase.AdoNet4.AseClient"
The following also results in an error:
List<car> cars = db.car.AsNoTracking()
    .Include("tires")
    .Include("tires.hub_caps")
    .Include("tires.hub_caps.colors")
    .Include("tires.hub_caps.sizes")
    .Include("tires.hub_caps.sizes.units")
    .Where(c => c.tires == 13).ToList();
The error is as follows:
An exception of type 'System.Data.EntityCommandCompilationException' occurred in System.Data.Entity.dll but was not handled in user code
Additional information: An error occurred while preparing the command definition. See the inner exception for details.
Inner exception: "Specified method is not supported."
This points to a fault with the Sybase ASE data connector.
I am using data annotations on all tables to control which fields are returned. On the colors table, I have tried the following annotations to limit the properties returned to just the key:
[JsonIgnore]
[IgnoreDataMember]
Any ideas what might be causing this issue?
Alternatively, if I keep colors in and remove
.Include("tires.hub_caps.sizes")
.Include("tires.hub_caps.sizes.units")
then this also works. It seems that the Sybase ASE connector does not support cases where an Include statement forks from one object in two directions. Is there a way around this? The same issue occurs with Sybase ASE and the Progress data connector.
The issue does not occur in a standard ASP.NET MVC controller class - the problem is with serializing two one-to-many relationships on a single table to JSON.
This issue still occurs if lazy loading is turned on.
It seems to me that this is a bug with Sybase ASE that none of the connectors are able to work around.

Related

Getting mapping error. After dragging table with xml fields into dbml file and then compiling

"Error 1 DBML1005: Mapping between DbType 'Xml' and Type 'System.Xml.Linq.XElement' in Column 'XML_LAYOUT' of Type 'QUEST_BLOCK' is not supported."
The above is the error I am getting. What I am doing is dragging a table with XML fields as columns from Server Explorer into a .dbml file. When I compile, I get the above error. I then changed the server data type to blank, and the program compiled successfully. But at runtime, if I query the table directly using WCF in Silverlight, the function throws an error. After debugging I found that the select statement on the table returns the rows in the function; however, the error is produced in the reference file, in the following function.
Public Function EndGetQuestionListRecord1(ByVal result As System.IAsyncResult) As ServiceReference1.QUEST_BLOCK Implements ServiceReference1.Medex.EndGetQuestionListRecord1
    Dim _args((0) - 1) As Object
    Dim _result As ServiceReference1.QUEST_BLOCK = CType(MyBase.EndInvoke("GetQuestionListRecord1", _args, result), ServiceReference1.QUEST_BLOCK)
    Return _result
End Function
I hope someone around here can resolve this error.
rideonscreen, I recently started getting the same type of error. In my case I get it when dragging a stored procedure with an XML input parameter.
I wonder whether you managed to resolve the issue, and how.
I googled and found some articles:
http://dev.techmachi.com/?p=319
http://www.west-wind.com/Weblog/posts/505990.aspx
http://www.jonathanjungman.com/blog/post/Visual-Studio-Build-failed-due-to-validation-errors-in-dbml-file.aspx
"devenv /resetskippkgs" helps, but the next day the issue appears again.
What is also interesting is that I do not touch the LINQ2SQL model (.dbml file) at all; the code there has been the same for a long time. The issue is definitely related exclusively to Visual Studio.
P.S. I am thinking of migrating to EF.

Spring integration - ExpressionEvaluatingRequestHandlerAdvice - How to fix 'no dispatcher available for the channel'

This is a follow-up to this question.
Please refer to the attached system diagram and code.
System diagram here
Code sample here
In the attached code, I have highlighted what is relevant in blue.
QUESTIONS:
QUESTION 1:
At step 4 in the code, I have created new advice bean using
ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
I want to define it as an autowired bean, but am getting an error.
How do I fix it?
@Autowired
ExpressionEvaluatingRequestHandlerAdvice advice;
gives error: Error_Autowiring_Advice_Bean.
QUESTION 2:
advice.setOnSuccessExpressionString("'##Done publishing storing status into DB'");
advice.setOnFailureExpressionString("'##Error while storing status into DB'");
I do not see this text anywhere in the logs, but removing these two statements causes the program to stop working.
QUESTION 3:
The failureChannel/error flow works ok. But the success channel gives me the error:
Caused by: org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers, failedMessage=AdviceMessage [payload=##Done publishing storing Instruction status into DB, headers={id=abcd-1234, timestamp=1527868236889}, inputMessage=GenericMessage [payload=com.aa.bb.ccc.schema.SampleSchema#1b36eb2e, headers={file_originalFile=<path to file>.json, id=abcd-1234, file_name=<file name>.json, timestamp=1527868236342}]]
QUESTION 4:
In 5 & 6, I have created around 10 IntegrationFlows.from() calls in the whole program. Will this affect performance in any way?
[NOTE ON 3 in the diagram: implClassStoretoDB has a JdbcTemplate query to insert into a DB table.
It returns 1 if the DB insert is successful, and throws an exception if there is an error such as a 'Duplicate Primary Key' exception.]
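One possible reading of questions 1 and 3, as a sketch under assumptions rather than a confirmed fix: the advice can only be @Autowired if it is first declared as a @Bean somewhere, and "Dispatcher has no subscribers" on the success path usually means the channel the advice publishes to has no flow consuming it. The bean and channel names below (storeStatusAdvice, adviceSuccessChannel) are hypothetical:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice;

@Configuration
public class AdviceConfig {

    // Declaring the advice as a bean is what makes @Autowired injection
    // possible elsewhere (question 1).
    @Bean
    public ExpressionEvaluatingRequestHandlerAdvice storeStatusAdvice() {
        ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
        advice.setOnSuccessExpressionString("'##Done publishing storing status into DB'");
        advice.setOnFailureExpressionString("'##Error while storing status into DB'");
        // The evaluated expression is sent as an AdviceMessage to this channel;
        // without a subscriber on it, "Dispatcher has no subscribers" is thrown
        // (question 3).
        advice.setSuccessChannelName("adviceSuccessChannel");
        return advice;
    }

    // A consumer for the success channel, so the AdviceMessage has somewhere
    // to go; logging here would also make the onSuccess text visible
    // (question 2).
    @Bean
    public IntegrationFlow adviceSuccessFlow() {
        return IntegrationFlows.from("adviceSuccessChannel")
                .handle(message -> System.out.println("advice: " + message.getPayload()))
                .get();
    }
}
```

This is a Spring configuration fragment and needs a running Spring Integration context; the key point is that every channel named in the advice must have a subscribing flow.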

BigQuery Backend error when reading and writing tables in Dataflow

I get this error only when reading, then writing (to a different table). If I only read from the table, no error occurs. For example, the code below produces no error.
Pipeline p = Pipeline.create(
    PipelineOptionsFactory.fromArgs(args).withValidation().create());

PCollection<TableRow> BigQueryTableRow = p
    .apply(BigQueryIO.Read.named("ReadTable")
        .from("project:dataset.data_table"));

p.run();
But if I do the following, I get a 'BigQuery job Backend error'.
Pipeline p = Pipeline.create(
    PipelineOptionsFactory.fromArgs(args).withValidation().create());

PCollection<TableRow> BigQueryTableRow = p
    .apply(BigQueryIO.Read.named("ReadTable")
        .from("project:dataset.data_table"));

TableSchema tableSchema = new TableSchema().setFields(fields);

BigQueryTableRow.apply(BigQueryIO.Write
    .named("Write Members to BigQuery")
    .to("project:dataset.data_table_two")
    .withSchema(tableSchema)
    .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE)
    .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED));

p.run();
Some more details on the error
BigQuery job "dataflow_job" in project "project-name"
finished with error(s): errorResult: Backend error.
Job aborted.
I managed to figure out the problem on my own. The backend error message is produced because I have two repeated fields in my table.
If I try to output the entire table using BigQuery's web interface, it displays a more helpful error message:
Error: Cannot output multiple independently repeated fields
at the same time. Found memberships_is_coach and actions_type
It is unfortunate that the 'Backend error' message provides no real insight into the problem. Also, when only reading the data and not performing any operations, no error is given, which further exacerbates the problem.

How can I detect a connection failure in gorm?

I'm writing a small, simple web app in go using the gorm ORM.
Since the database can fail independently of the web application, I'd like to be able to identify errors that correspond to this case so that I can reconnect to my database without restarting the web application.
Motivating example:
Consider the following code:
var mrs MyRowStruct
db := myDB.Model(MyRowStruct{}).Where("column_name = ?", value).First(&mrs)
return &mrs, db.Error
In the event that db.Error != nil, how can I programmatically determine if the error stems from a database connection problem?
From my reading, I understand that gorm.DB does not represent a connection, so do I even have to worry about reconnecting or re-issuing a call to gorm.Open if a database connection fails?
Are there any common patterns for handling database failures in Go?
Gorm appears to swallow database driver errors and emit only its own classification of error types (see gorm/errors.go). Connection errors do not currently appear to be reported.
Consider submitting an issue or pull request to expose the database driver error directly.
[Original]
Try inspecting the runtime type of db.Error per the advice in the gorm readme "Error Handling" section.
Assuming it's an error type returned by your database driver you can likely get a specific code that indicates connection errors. For example, if you're using PostgreSQL via the pq library then you might try something like this:
import "github.com/lib/pq"

// ...

if db.Error != nil {
    pqerr, ok := db.Error.(*pq.Error)
    if ok && pqerr.Code[0:2] == "08" {
        // PostgreSQL "Connection Exceptions" are class "08"
        // http://www.postgresql.org/docs/9.4/static/errcodes-appendix.html#ERRCODES-TABLE
        // Do something for connection errors...
    } else {
        // Do something else with a non-pg error or non-connection error...
    }
}

Handling error after aggregation

I am reading some lines from a CSV file, converting them to business objects, aggregating these into batches, and passing the resulting aggregates to a bean, which may throw a PersistenceException.
Something like this:
from("file:inputdir").split().tokenize("\n").bean(a).aggregate(constant(true), new AbstractListAggregationStrategy(){...}).completionSize(3).bean(b)
I have an onException(Exception.class).handled(true).to("file:failuredir").log() handler. If an exception occurs in bean(a), everything is handled as expected: wrong lines in inputdir/input.csv are written to failuredir/input.csv.
Now if bean(b) fails, Camel seems to fail to reconstruct the original message:
org.apache.camel.component.file.GenericFileOperationFailedException: Cannot store file: target/failure/ID-myhostname-34516-1372093690069-0-7
Having tried various approaches to get this working, such as using HawtDBAggregationRepository, toggling useOriginalMessage on onException, and propagating the exception back in my AggregationStrategy, I am out of ideas.
How can I achieve the same behaviour for bean(b) that can be seen with bean(a)?
The aggregator is a stateful EIP pattern, so when it sends out a message, that message is a new Exchange. bean(b) therefore cannot get access to the original message that came from the file route.