Connected To XEPDB1 From SQL Developer [duplicate] - oracle-sqldeveloper

I am using an Oracle database in a Windows environment and running a JSP/servlet web application in Tomcat. After I perform some operations with the application, it gives me the following error:
ORA-12518, TNS: listener could not hand off client connection
Can anyone help me identify the cause of this problem and suggest a solution?

The solution to this question is to increase the number of processes:
1. Open a command prompt
2. sqlplus / as sysdba // log in as the SYSDBA user
3. startup force;
4. show parameter processes; // shows the allocated processes (a default of around 150); then increase the count to 800
5. alter system set processes=800 scope=spfile;
6. Restart the database so the spfile change takes effect.
Tried and tested.
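The steps above look like this as a SQL*Plus session (800 is just the value from this answer; size the parameter for your own workload):

```sql
-- from a command prompt: sqlplus / as sysdba
SHOW PARAMETER processes;                     -- current limit (often 150 by default)
ALTER SYSTEM SET processes=800 SCOPE=SPFILE;  -- only takes effect after a restart
SHUTDOWN IMMEDIATE;
STARTUP;
```

Because the change is written to the spfile, nothing happens until the instance is restarted.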

I ran across the same problem. In my case it was a new install of the Oracle client on a new desktop that was giving the error; other clients were working, so I knew the fix wouldn't be in the database configuration. tnsping worked properly, but sqlplus failed with the ORA-12518 listener error.
My tnsnames.ora entry had a SID instead of a SERVICE_NAME. Once I fixed that, I still got the same error, and then found I had the wrong service name as well. Once I fixed that too, the error went away.
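For comparison, a tnsnames.ora entry keyed on SERVICE_NAME rather than SID looks roughly like this (the alias, host, and service name below are placeholders, not values from this question):

```
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orclpdb.example.com)
    )
  )
```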

In my case I found that it was because I hadn't closed the database connections properly in my application. Too many connections were open, and Oracle could not create any more: a resource limitation. Later, when I checked the Oracle forums, I found some of the reasons mentioned there for this problem:
In most cases this happens due to a network problem.
Your server may be running out of memory and need to swap memory to disk. One cause can be an Oracle process consuming too much memory.
If it is the latter, verify that large_pool_size is adequate, or check that enough dispatchers are configured for all connections.
You can check the Oracle forums for further details.
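Since the root cause here was leaked connections, here is a minimal sketch of the try-with-resources pattern that guarantees connections get closed. It uses a stand-in AutoCloseable class because a real java.sql.Connection needs a live database; in a servlet you would write `try (Connection c = dataSource.getConnection()) { ... }` the same way:

```java
public class ConnectionDemo {
    // Stand-in for a pooled database connection (hypothetical class).
    static class FakeConnection implements AutoCloseable {
        static int open = 0;           // how many connections are currently open
        FakeConnection() { open++; }
        void query() { /* pretend to run a statement */ }
        @Override public void close() { open--; }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            // close() runs automatically at the end of the block, even on an
            // exception, so repeated requests never accumulate open connections.
            try (FakeConnection c = new FakeConnection()) {
                c.query();
            }
        }
        System.out.println("open connections: " + FakeConnection.open);
        // prints "open connections: 0"
    }
}
```

Without the try-with-resources (or an equivalent finally block), each request would leak a connection until the server hit its process limit, which is exactly the ORA-12518 scenario described above.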

If the issue appears from one day to the next for no apparent reason, add the following line at the bottom of the listener.ora file. If your ORACLE_HOME environment variable is set like this:
(ORACLE_HOME = C:\oracle11\app\oracle\product\11.2.0\server)
then the line to add is:
ADR_BASE_LISTENER = C:\oracle11\app\oracle\

I had the same problem when executing queries in my application (I'm using the Oracle client with Ruby on Rails).
The problem started when I accidentally opened several connections to the DB and didn't close them.
When I fixed this, everything started to work fine again.
Hope this helps anyone else with the same problem.

I experienced the same error after upgrading to Windows 10. I solved it by starting the Oracle services that had been stopped: open the Services console and start all of the Oracle services.

I had the same issue. After restarting all Oracle services it worked again.


Item Not Supported: Error not Going Away

I am using Zabbix 3.2 for over 100 VMs (Windows, Linux, Mac) and I added a script to all of the Windows VMs. The script is local to every VM and agentd.conf has:
It also has a few other UserParameters, although that is not a part of this issue.
When I go to Items, the red "i" is present, status is "Not Supported", and hovering over the red "i" says:
Received value [No more connections can be made to this remote computer at this time because there are already as many connections as the computer can accept] is not suitable for value type [Numeric (unsigned)] and data type [Decimal].
I find this very weird, as the script is local to each VM and not using RDP. I had been trying to use a shared folder so the script could live in one location; that obviously did not work, which is why I am running it locally.
The log says "old_random_var is not supported". This is another parameter that is working in Zabbix but still produces this log entry. Once again, this old_var is completely unrelated to var.
zabbix_get also reports that the item is unsupported.
Any advice would be greatly appreciated.
Edit: an interesting addition: of all the nodes, it works on about 20 of them, seemingly at random, and not on the others. There is NOTHING unique about these nodes.
The UserParameter is connecting to Windows FTP; the connection limit must be increased on the Windows side.
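For context: a Zabbix item with value type Numeric (unsigned) only accepts plain integer output, so the script behind the UserParameter must print a bare number and nothing else. When the Windows connection limit is hit, the script prints the error text quoted above instead of a number, and Zabbix marks the item unsupported. A hypothetical sketch of a conforming entry (the key and script path are made up for illustration):

```
# zabbix_agentd.conf
UserParameter=custom.ftp.filecount,powershell -NoProfile -File "C:\scripts\ftp_filecount.ps1"
# ftp_filecount.ps1 must emit only an integer on stdout, e.g. 42
```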

How can I set maxIdleTimeMS by command line or mongo-shell?

I have deployed my PHP application (just out of curiosity: we're using Laravel) and we're using MongoDB in the persistence layer. However, I've noticed a high connection count in the connection pool.
It strikes me that connections remain open for many minutes before they are closed.
The maxIdleTimeMS parameter seems like what I'm looking for (even though I'm not sure...).
That said, I would like to know how I can set the maxIdleTimeMS parameter through the MongoDB shell or the command line (or even via a configuration file). Is that possible?
Thanks in advance!
Please see the important note in the comment below from #jmikola.
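For what it's worth, maxIdleTimeMS is a client-side connection-pool option, not a server parameter, so it is set where the driver builds its connection (typically the connection string) rather than from the mongo shell. A sketch with a placeholder host and database (60000 ms is an example value):

```
mongodb://db.example.com:27017/mydb?maxIdleTimeMS=60000
```

In a Laravel/PHP setup this URI would normally live in the application's database configuration rather than anywhere on the server.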

packet_write_wait: Connection to Broken pipe

What does it mean when the terminal throws this error, and how do I solve it?
packet_write_wait: Connection to Broken pipe
It only started happening today, after working normally for years. My terminal keeps disconnecting at a certain point. I have already searched Google, but most of the results are about "Write failed: Broken pipe",
which I solved years ago. I just ran into this new, annoying problem today.
I experienced this problem as well and spent a few days trying to bisect it.
As others have noted, playing with SSH keepalive parameters (ClientAliveInterval, ClientAliveCountMax, ServerAliveInterval, and ServerAliveCountMax) or kernel TCP parameters (TCPKeepAlive on/off) does not solve the problem.
After experimenting with USB-to-Ethernet drivers and tcpdump, I realized the issue was due to the 4.8 kernel I was using. I switched the source (sending side) to 4.4 LTS and the problem disappeared (rsync via ssh and scp were working nicely again). The destination side can remain on 4.8 if you want; in my use case this worked (tested).
On the technical side, we can narrow the issue down a little thanks to a Wireshark dump I made: the TCP channel of the SSHv2 protocol is being reset (the RST flag of TCP set to 1), causing the connection to abort. I don't know the cause of that RST yet; I need to bisect from 4.8.1 to 4.8.11 to find it.
I'm not saying your problem is specifically due to kernel 4.8, but given the date you posted your question, there is a good chance you are currently using a kernel more recent than 4.4.
If this is an SSH connection, you might want to make sure you send keepalive messages to the server.
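Client-side keepalives go in ~/.ssh/config (or on the command line with -o); the values below are examples, not required settings:

```
# ~/.ssh/config
Host *
    # send a keepalive after 60 s of silence; give up after 3 unanswered probes
    ServerAliveInterval 60
    ServerAliveCountMax 3
```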
Connect through another Wi-Fi network.
I don't know why or how this works, but it does.
The original poster sthapaun already mentioned this solution in a comment, but I want to add that it works for me, too.

FitNesse Slim does not execute after changing the port

I ran my FitNesse Slim tests on port 8080. After closing the browser and re-running my Slim table, it shows this error: "Testing was interrupted and results are incomplete. Assertions: 0 right, 0 wrong, 0 ignored, 0 exceptions".
Can anyone help me out?
The SlimServer opens and listens to a server socket. It gets the port number from FitNesse via the command line. The default is 8085 and it cycles through the next 10 ports to avoid collisions. If 8085 is not convenient for you, you can set the SLIM_PORT variable to any port you like. This variable can be defined on a page by page basis with !define, or it can be specified with -DSLIM_PORT=xxxx on the java command line, or it can be an environment variable.
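The two ways of setting SLIM_PORT described above can be sketched as follows (9000 is an example port, and the standalone-jar invocation assumes the common fitnesse-standalone.jar setup). On a wiki page:

```
!define SLIM_PORT {9000}
```

Or on the java command line when starting FitNesse:

```
java -DSLIM_PORT=9000 -jar fitnesse-standalone.jar -p 8080
```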
Is it possible another process is running in this range? We ran into a similar problem when we put our mock service on port 8085: one out of ten runs failed, and the exception was not very descriptive. We assume our problem was caused by the above; we are still testing whether the change works. Our FitNesse port is in a completely different range, by the way: 9090.
When you see "Testing was interrupted and results are incomplete. Assertions: 0 right, 0 wrong, 0 ignored, 0 exceptions", there are a couple of things to look for:
Do you have any code that creates an object in a static class that doesn't get cleaned up? For example, a WebDriver BrowserDriver instance that you never called close()/quit() on.
Is something else still running when your test is closed, keeping the connection in use and preventing a new connection?
Do you see any stack traces in the output page (the page you can get to after the test completes, which lists "Test Completed OK" or "Test Completed with Errors")?
Do you see any stack traces in the command line you ran FitNesse from?
Any of these can point you toward what is causing FitNesse to fail to complete the tests (some are causes and some are diagnostics).
Also, are you using the latest release? Some improvements around this behavior were added to prevent a System.exit() from being called. No guarantee this fixes it, but maybe.
Finally, when you say you closed the browser: if you are referring to the window you ran the test from, understand that FitNesse has no idea whether the browser window that launched a test is still there. It keeps running the test and doesn't care that there isn't a client waiting for the result.
Maybe you should have a look at Xebium. I am not affiliated with Xebia at all, but I use it daily for testing functionality within browsers, and it works very well.
Also, could you clarify the question a little more? What does the test case look like? What does your setup look like, etc.?

STOP: c000021a {Fatal System Error} The initial session process or system process terminated unexpectedly

I'm encountering this error after expanding the disk space (done in Hyper-V) of a virtual machine.
STOP: c000021a {Fatal System Error} The initial session process or system process terminated unexpectedly with a status of (0x00000000) (0xc000012d 0x001003f0).
The virtual server is Windows Server 2008 R2 Enterprise Edition, which is also a Domain Controller, so now my whole environment is down :/
I've tried to repair Windows, but there is no restore point. Using the command line, I've also tried sfc /SCANNOW /OFFBOOTDIR /OFFWINDIR, but got the error "Windows Resource Protection could not perform the requested operation".
I initially responded to this question to ask whether Christof ever found a solution. That's not allowed, so my post was deleted.
I'm back to share that I solved the above problem for myself using a mix-and-match set of backed-up registry files. I believe the only reason this worked for me is that there had been ZERO changes to the server between the times the registry files were backed up. Most of the registry files I used in the recovery were from c:\windows\system32\config\system\regbak, but the SOFTWARE file there had a timestamp too close to the time of my initial failure, so I used one that I had created in \windows\tmp when I began this recovery process. I followed a guide which has apparently been deleted, but you can find references to it by searching for kb307545. Also make sure you have a backup of the COMPONENTS hive/file.