How to tackle high syslogs usage in Sybase ASE

As I understand it, syslogs is the transaction log: the information needed to roll back a transaction is recorded there.
For the last few months I have been seeing high syslogs usage in my Sybase database. Looking at the database activity at the times of high usage, I cannot find any query that would cause it. I also have the "trunc log on chkpt" dboption enabled.
sp_helpthreshold
segment name free pages last chance? threshold procedure
--------------- ------------- --------------- ----------------------
logsegment 109296 1 sp_thresholdaction
Could anybody please point out whether any other setting is required to keep syslogs usage under control?
Result of sp_helpdb DB01:
name db_size owner dbid created durability status
------- ------------- -------- ------- ------------ ------------- ------------------------------------------------------------------------
tlew04 33000.0 MB sa 13 Apr 17, 2013 full select into/bulkcopy/pllsort, trunc log on chkpt, abort tran on log full
device_fragments size usage created free kbytes
------------------- ------------- -------------------- ------------------- ----------------
dev29 1000.0 MB data only May 26 2013 5:17AM 928
dev32 250.0 MB log only May 26 2013 5:17AM not applicable
dev29 250.0 MB data only May 26 2013 5:17AM 0
dev32 7.0 MB log only May 26 2013 5:17AM not applicable
dev38 13.0 MB log only May 26 2013 5:17AM not applicable
dev29 1450.0 MB data only May 26 2013 5:17AM 240
dev38 230.0 MB log only May 26 2013 5:17AM not applicable
dev29 300.0 MB data only May 26 2013 5:17AM 416
dev38 200.0 MB log only May 26 2013 5:17AM not applicable
dev29 500.0 MB data only May 26 2013 5:17AM 1230
dev38 300.0 MB log only May 26 2013 5:17AM not applicable
dev29 500.0 MB data only May 26 2013 5:17AM 876
dev38 100.0 MB log only May 26 2013 5:17AM not applicable
dev29 200.0 MB data only May 26 2013 5:17AM 0
dev38 200.0 MB log only May 26 2013 5:17AM not applicable
dev29 3200.0 MB data only May 26 2013 5:17AM 2316
dev38 400.0 MB log only May 26 2013 5:17AM not applicable
dev29 200.0 MB data only May 26 2013 5:17AM 0
dev38 200.0 MB log only May 26 2013 5:17AM not applicable
dev29 18555.0 MB data only May 26 2013 5:17AM 9156764
dev35 3845.0 MB data only May 26 2013 5:17AM 3921884
dev1 1100.0 MB log only Jun 8 2014 9:06AM not applicable
column1
-----------------------------------------------------
log only free kbytes = 3059998

As seen in your sp_helpdb output, you have 3000 MB of log space and currently 2988 MB free, so the problem is not occurring at this moment.
Assuming you are not using Sybase Replication Server, what is likely happening is that a long-running transaction is keeping the truncation point in the log from moving. This in turn causes the log to fill up before the transaction can commit and the checkpoint and truncation can occur.
In other words, say you have 10 transactions, 1 through 10, executed in order. If 2 through 10 finish but 1 is still open, the transaction log will not be truncated until 1 finishes.
To see if you have a long-running transaction, check master..syslogshold, which shows the oldest running transaction in each database.
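For example, a quick check might look like this (a sketch; the database name DB01 is taken from your post, and the column names are from the standard syslogshold layout):

```sql
-- Show the oldest open transaction currently holding the log truncation point
select spid, name, starttime
from master..syslogshold
where dbid = db_id('DB01')
```

If this returns a row with an old starttime while the log is filling, that spid is the transaction to investigate.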
There are a couple of things you can try to resolve this issue.
Increase the transaction log size. Currently the transaction log appears to be approximately 10% of the size of your data (3000 MB of log for 30000 MB of data). You could try increasing that to 15-20% and see if the extra space gives the long-running transaction enough time to complete.
The other option is to figure out which transactions are running long and see if those queries can be optimized to reduce their run time.
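If you decide to extend the log, a sketch of the command might be (this assumes the classic ASE syntax with the size given in MB, and reuses device dev1 and database tlew04 from the sp_helpdb output above; adjust the device and size to your layout):

```sql
-- Add 500 MB of log space for the database on device dev1
alter database tlew04 log on dev1 = 500
```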

Related

Kafka too many open files, many tiny logs, with high ulimit and about 5k segments

Kafka keeps reporting "Too many open files". I just restarted with a clean state, but after 10 minutes or so I end up with:
lsof | grep cp-kafka | wc -l:
454225
process limits:
Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size 8388608 unlimited bytes
Max core file size 0 unlimited bytes
Max resident set unlimited unlimited bytes
Max processes 96186 96186 processes
Max open files 800000 800000 files
Max locked memory 16777216 16777216 bytes
Max address space unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max pending signals 96186 96186 signals
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
I have set log.retention.hours to -1, as I want to keep all logs from the past. In my server.properties I had configured 100 MB segment files, but for some reason Kafka creates 10 MB logs. The strange thing is that I "only" have a relatively low number of files in the log directory:
find | wc -l
5884
I don't understand what I am doing wrong here.
I installed the confluent-kafka deb packages on Ubuntu 18.04.
Kafka 2.0
Messages are about 500 bytes each
Auto topic creation is enabled
One directory as an example; are my messages too small for the timeindex?
rw-r--r-- 1 2.2K Sep 30 10:03 00000000000000000000.index
rw-r--r-- 1 1.2M Sep 30 10:03 00000000000000000000.log
rw-r--r-- 1 3.3K Sep 30 10:03 00000000000000000000.timeindex
rw-r--r-- 1 560 Sep 30 10:03 00000000000000004308.index
rw-r--r-- 1 293K Sep 30 10:03 00000000000000004308.log
rw-r--r-- 1 10 Sep 30 10:03 00000000000000004308.snapshot
rw-r--r-- 1 840 Sep 30 10:03 00000000000000004308.timeindex
rw-r--r-- 1 10M Sep 30 10:03 00000000000000005502.index
rw-r--r-- 1 97K Sep 30 10:04 00000000000000005502.log
rw-r--r-- 1 10 Sep 30 10:03 00000000000000005502.snapshot
rw-r--r-- 1 10M Sep 30 10:03 00000000000000005502.timeindex
I also added the following lines to the server config, but the index files remain 10 MB max:
log.segment.bytes=1073741824
log.segment.index.bytes=1073741824
BTW, I am sending messages with timestamps in the past, with log retention of 1000 years.

APDU command to write the Changed PIN into the card

What APDU command gets the PIN from the smart card, and what command writes the changed PIN into the card?
To write the PIN to the card I found 80 D4 00 00 08 01 02 03 04 05 06 07 08, which should set the PIN to 1 2 3 4 5 6 7 8, but we got 6D 00 in response, i.e. "instruction code not supported or invalid".
Or are there any WIN APIs that can be used?
Thanks in advance.
Severe misunderstanding: nothing gets the stored PIN from the card. Using the VERIFY command you can only supply a comparison value and find out whether it is correct; if it is not, the retry counter decreases and the PIN may become blocked. There is a standard command CHANGE REFERENCE DATA (see ISO 7816-4), but standard commands have CLA=00, while you currently use CLA=80 (the first byte of the command).
6D00 is also defined there, and since it means "wrong INS code", the whole command may be wrong. (A PIN consisting of non-printable bytes is also somewhat untypical.)
Without knowing which card you have and which specification it complies with, you will not make significant progress.
While WinSCard may be your friend for getting the command transported, it will not help with finding the correct bytes.
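For reference, a CHANGE REFERENCE DATA command per ISO 7816-4 uses CLA=00 and INS=24. A hedged sketch with two 8-byte ASCII PINs follows; the P2 reference, the PIN encoding, and the lengths are assumptions here, since your card's specification decides all of them:

```
00 24 00 01 10  31 32 33 34 35 36 37 38  38 37 36 35 34 33 32 31
|  |  |  |  |   \----- current PIN ----/ \------- new PIN -----/
|  |  |  |  Lc = 0x10 (16 bytes of command data)
|  |  |  P2 = reference to the PIN (card-specific, 01 assumed)
|  |  P1 = 00: data field holds current PIN followed by new PIN
|  INS = 24 (CHANGE REFERENCE DATA)
CLA = 00 (interindustry)
```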

Opening a Text File using PHP - Fixing the Format

Basically, my text file contains this info:
WITH PARALLEL AND SERIAL
----- [System Info] -----------------------------------------------------------
Property Value
Machine Type AT/AT COMPATIBLE
Infrared (IR) Supported No
DMI System UUID 809EC223-DAD7DD11-A2F33085-A993FFAC
UUID 23C29E80-D7DA-11DD-A2F3-3085A993FFAC
Disk Space Disk C: 89 GB Available, 97 GB Total, 89 GB Free
Disk Space Disk D: 355 GB Available, 368 GB Total, 355 GB Free
Disk Space Disk F: 274 MB Available, 3837 MB Total, 274 MB Free
Physical Memory 1724 MB Total, 1173 MB Free
Memory Load 31%
Virtual Memory 3619 MB Total, 3184 MB Free
PageFile Name \??\C:\pagefile.sys
PageFile Size 2046 MB
In use 35 MB
Max used 35 MB
Registry Size 3 MB (current), 120 MB (maximum)
Profile GUID {bef54e40-80cb-11e2-a600-806d6172696f}
The system clock interval 15 ms
----- [Motherboard] ---------------------------------------
Property Value
Manufacturer ASUSTeK COMPUTER INC.
Model P8H61-M LX R2.0
Version Rev X.0x
Serial Number 120801441113185
North Bridge Intel ID0100 Revision 09
South Bridge Intel ID1C5C Revision 09
CPU Intel(R) Pentium(R) CPU G645 # 2.90GHz
Cpu Socket
System Slots 4 PCI
Memory Summary
Maximum Capacity 16384 MBytes
Memory Slots 2
Error Correction None
Warning! Accuracy of DMI data cannot be guaranteed
However, using this code in PHP to open it:
<?php
if (isset($_POST["submit"])) {
    $myfile = fopen("baliwag_04162015.txt", "r") or die("Unable to open file!");
    echo fread($myfile, filesize("baliwag_04162015.txt"));
    fclose($myfile);
}
?>
I get something like this:
WITH PARALLEL AND SERIAL ----- [System Info] ----------------------------------------------------------- Property Value Machine Type AT/AT COMPATIBLE Infrared (IR) Supported No DMI System UUID 809EC223-DAD7DD11-A2F33085-A993FFAC UUID 23C29E80-D7DA-11DD-A2F3-3085A993FFAC Disk Space Disk C: 89 GB Available, 97 GB Total, 89 GB Free Disk Space Disk D: 355 GB Available, 368 GB Total, 355 GB Free Disk Space Disk F: 274 MB Available, 3837 MB Total, 274 MB Free Physical Memory 1724 MB Total, 1173 MB Free Memory Load 31% Virtual Memory 3619 MB Total, 3184 MB Free PageFile Name \??\C:\pagefile.sys PageFile Size 2046 MB In use 35 MB Max used 35 MB Registry Size 3 MB (current), 120 MB (maximum) Profile GUID {bef54e40-80cb-11e2-a600-806d6172696f} The system clock interval 15 ms ----- [Motherboard] --------------------------------------- Property Value Manufacturer ASUSTeK COMPUTER INC. Model P8H61-M LX R2.0 Version Rev X.0x Serial Number 120801441113185 North Bridge Intel ID0100 Revision 09 South Bridge Intel ID1C5C Revision 09 CPU Intel(R) Pentium(R) CPU G645 # 2.90GHz Cpu Socket System Slots 4 PCI Memory Summary Maximum Capacity 16384 MBytes Memory Slots 2 Error Correction None Warning! Accuracy of DMI data cannot be guaranteed
What can I do so that I can achieve this output:
WITH PARALLEL AND SERIAL
----- [System Info] -----------------------------------------------------------
Property Value
Machine Type AT/AT COMPATIBLE
Infrared (IR) Supported No
DMI System UUID 809EC223-DAD7DD11-A2F33085-A993FFAC
UUID 23C29E80-D7DA-11DD-A2F3-3085A993FFAC
Disk Space Disk C: 89 GB Available, 97 GB Total, 89 GB Free
Disk Space Disk D: 355 GB Available, 368 GB Total, 355 GB Free
Disk Space Disk F: 274 MB Available, 3837 MB Total, 274 MB Free
Physical Memory 1724 MB Total, 1173 MB Free
Memory Load 31%
Virtual Memory 3619 MB Total, 3184 MB Free
PageFile Name \??\C:\pagefile.sys
PageFile Size 2046 MB
In use 35 MB
Max used 35 MB
Registry Size 3 MB (current), 120 MB (maximum)
Profile GUID {bef54e40-80cb-11e2-a600-806d6172696f}
The system clock interval 15 ms
----- [Motherboard] ---------------------------------------
Property Value
Manufacturer ASUSTeK COMPUTER INC.
Model P8H61-M LX R2.0
Version Rev X.0x
Serial Number 120801441113185
North Bridge Intel ID0100 Revision 09
South Bridge Intel ID1C5C Revision 09
CPU Intel(R) Pentium(R) CPU G645 # 2.90GHz
Cpu Socket
System Slots 4 PCI
Memory Summary
Maximum Capacity 16384 MBytes
Memory Slots 2
Error Correction None
Warning! Accuracy of DMI data cannot be guaranteed
If you're outputting it in a browser, that is expected: HTML collapses newlines and runs of whitespace. If you want the formatting preserved in the browser, wrap the output in a <pre> (preformatted) tag:
echo '<pre>';
echo fread($myfile, filesize("baliwag_04162015.txt"));
echo '</pre>';
Hint: check the page's view-source and you'll see the text is intact there; only the HTML rendering collapses the line breaks. Alternatively, echo nl2br(htmlspecialchars($text)); converts the newlines to <br> tags (and escapes any HTML-special characters in the file).

Matrix multiplication poor efficiency on a 4 socket NUMA system

I am developing dense matrix multiplication code (https://github.com/zboson/gemm) to learn about parallel programming. I use OpenMP for threading. My system has four sockets, each with a Xeon E5-1620 processor. Each processor has 10 cores/20 hyper-threads, so the total is 40 cores/80 hyper-threads. When I run my code on a single thread I get about 70% of peak flops (13 out of 19.2 GFLOPS). However, when I run it using 40 threads I only get about 30% of peak flops (185 out of 682.56 GFLOPS). On a separate system (Sandy Bridge) with only one socket and 4 cores I get about 65% efficiency with four threads.
I bind the threads to the physical cores using system calls. I have tried disabling this and using export OMP_PROC_BIND=true or export GOMP_CPU_AFFINITY="0 4 8 12 16 20 24 28 32 36 1 5 9 13 17 21 25 29 33 37 2 6 10 14 18 22 26 30 34 38 3 7 11 15 19 23 27 31 35 39" instead, but these make no difference. I still get about 30% efficiency (though I can get worse efficiency with other, bad binding settings).
What more can I do to improve my efficiency? I understand a first touch policy is used so the memory pages are allocated by the first thread that touches them. When I write out the matrix product maybe I should make a separate output for each socket and then merge the results from each socket in the end?
I'm using GCC 4.8.0 with Linux 64-bit kernel 2.6.32
Edit: I use the following binding for matrix size = 2048x2048
export GOMP_CPU_AFFINITY="0 4 8 12 16 20 24 28 32 36 1 5 9 13 17 21 25 29 33 37 2 6 10 14 18 22 26 30 34 38 3 7 11 15 19 23 27 31 35 39"
This should give threads 0-9 -> node 0, 10-19 -> node 1, 20-29 -> node 2, 30-39 -> node 3.
With this binding I get:
nthread efficiency node
1 77% 0
2 76% 0
4 74% 0
6 62% 0
8 64% 0
10 52% 0
14 50% 0+1
16 30% 0+1
It is reasonable to suspect that the efficiency drops because of too many cross-socket communications. But setting thread affinity alone is not enough to avoid them; this should be addressed at the algorithmic level, e.g. by partitioning the work so that cross-NUMA-node interactions are minimized. The best approach is to implement it in a cache-oblivious way, e.g. parallelize not by rows or columns but by 2D tiles.
For example, you can use tbb::parallel_for with blocked_range2d in order to use the cache more efficiently.
Dropped efficiency at higher levels of parallelism can also indicate that there is not enough work to justify the synchronization overhead.

Output from shell_exec() containing accented characters getting mangled

I've got a command which I'm running from PHP using shell_exec().
Sometimes the output of the command will contain accented characters.
When run from Bash, the output appears correctly. However, when run via shell_exec(), the accented characters are lost and the output is truncated somewhat.
Example output from Bash:
. D 0 Tue Oct 25 16:45:26 2011
.. D 0 Tue Oct 25 16:45:26 2011
...
Background pres for political speech maggie & gemma.ppt A 3323392 Fri Oct 24 14:31:26 2008
extra listening exercise on la télévision.doc A 24064 Wed Jan 11 08:12:32 2006
gender of nouns.ppt A 42496 Fri Sep 10 07:55:42 2004
...
63999 blocks of size 8388608. 36710 blocks available
Example output from shell_exec - note what happens to télévision, vidéo etc.:
. D 0 Tue Oct 25 16:45:26 2011
.. D 0 Tue Oct 25 16:45:26 2011
...
Background pres for political speech maggie & gemma.ppt A 3323392 Fri Oct 24 14:31:26 2008
extra listening exercise on la t gender of nouns.ppt A 42496 Fri Sep 10 07:55:42 2004
...
63999 blocks of size 8388608. 36710 blocks available
The solution that worked for me was to run these calls before shell_exec(), to make sure the correct locale is used:
$locale = 'en_GB.utf-8';
setlocale(LC_ALL, $locale);
putenv('LC_ALL=' . $locale);
Presumably you can change en_GB to whatever your language is. Note that the locale string appears to be case-sensitive.
