SHOW INNODB STATUS - FILE I/O - OS reads/writes/syncs?

0 votes
346 views

I've searched but with no luck... what exactly do these variables mean:

1343928 OS file reads, ********** OS file writes, 19976022 OS fsyncs

I am wondering whether my innodb_buffer_pool setting is too low. Does 'OS file reads' show the number of times InnoDB files have been read into memory since the server started? What about file writes/fsyncs?

posted Jun 21, 2013 by anonymous


1 Answer

+1 vote
 
Best answer

These are the total numbers of reads/writes/fsyncs (the number of system calls, actually?) since the server started (or maybe since the last FLUSH call?). By themselves they are not very meaningful without knowing the time span over which the counters accumulated.

The per-second values on the following line are much more interesting.

http://www.mysqlperformanceblog.com/2006/07/17/show-innodb-status-walk-through/

has a pretty good description of the SHOW ENGINE INNODB STATUS output, even though it is not too detailed on this specific section.
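For illustration, a made-up FILE I/O section looks roughly like the sketch below (all numbers are invented); the per-second line right after the totals is the one worth watching:

FILE I/O
--------
...
12345678 OS file reads, 23456789 OS file writes, 34567890 OS fsyncs
250.00 reads/s, 16384 avg bytes/read, 310.00 writes/s, 120.00 fsyncs/s

As for whether the buffer pool is too small, a rough (and hedged) sanity check is to compare how often InnoDB had to go to disk with how often a page was requested at all:

-- Innodb_buffer_pool_reads         = logical reads that missed the pool and went to disk
-- Innodb_buffer_pool_read_requests = all logical read requests
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';

If Innodb_buffer_pool_reads keeps growing as a noticeable fraction of Innodb_buffer_pool_read_requests over a representative time window, the buffer pool is probably too small.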

answer Jun 21, 2013 by anonymous
As far as I can see, the changes in these values are used by the Percona Cacti monitoring templates to graph "InnoDB I/O".
Can anyone answer the question definitively? ;)
Similar Questions
+1 vote

I'm trying to figure out how InnoDB executes a SHOW CREATE TABLE query, so I can work out what could possibly have made these statements suddenly slow down.

mysql> SHOW CREATE TABLE my_table;
...
1 row in set (37.48 sec)

We tend to execute many of these statements concurrently, but it has never been a problem until recently. I upgraded the I/O subsystem, and our statistics indicate that it is not maxing out I/O (at least not IOPS).
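A hedged diagnostic sketch (not from the original question): while one of the slow SHOW CREATE TABLE statements is running, it can be worth checking what the session is actually waiting on, since a metadata lock held by another transaction or by DDL would explain the slowdown without any I/O pressure:

-- In a separate session, while the slow statement is running:
SHOW FULL PROCESSLIST;
-- A State such as "Waiting for table metadata lock" points at a lock holder
-- rather than at the I/O subsystem.

This is only a first check; the processlist state alone often says whether to look at locking or at storage.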

+1 vote

I have in my server database some tables that are too big and produce slow queries, even with the correct indexes created.

For my application it is necessary to keep all the data, because we run an authentication process for RADIUS users (AAA protocol) to determine whether a user can or cannot access the Internet (depending on the time of all his sessions).

So, with 8GB of data in one table, what would you advise? Partitioning and sharding are ruled out because we are working with disk arrays, so they do not apply. Another option is to delete rows, but in this case I can't. On the other hand, maybe the only possible solution is to increase the resources (RAM).

Any suggestions?
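A hedged aside (not part of the original question): if adding RAM is the direction, it helps to first measure how much data plus index the biggest tables actually occupy, so innodb_buffer_pool_size can be sized to hold the working set. A sketch using information_schema:

-- Largest tables by data + index size, in MB (rounded).
SELECT TABLE_SCHEMA,
       TABLE_NAME,
       ROUND((DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024) AS total_mb
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('information_schema', 'mysql', 'performance_schema')
ORDER BY (DATA_LENGTH + INDEX_LENGTH) DESC
LIMIT 10;

If the 8GB table and its indexes fit in the buffer pool, most of the authentication lookups will be served from memory rather than from the disk array.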

+1 vote

The table below contains nearly a billion rows:

CREATE TABLE `Sample1` (
  `c1` bigint(20) NOT NULL AUTO_INCREMENT,
  `c2` varchar(45) NOT NULL,
  `c3` tinyint(4) DEFAULT NULL,
  `c4` tinyint(4) DEFAULT NULL,
  `time` bigint(20) DEFAULT NULL,
  PRIMARY KEY (`c1`),
  KEY `varchar_time_idx` (`c2`,`time`),
  KEY `varchar_c3_time_idx` (`c2`,`c3`,`time`),
  KEY `varchar_c4_time_idx` (`c2`,`c4`,`time`),
  KEY `varchar_c3_c4_time_idx` (`c2`,`c3`,`c4`,`time`)
) ENGINE=InnoDB AUTO_INCREMENT=10093495 DEFAULT CHARSET=utf8

Four multi-column indexes were created because the WHERE clauses use the following combinations of conditions (see the sketch after this list):

1) c2 and time
2) c2 and c3 and time
3) c2 and c4 and time
4) c2 and c3 and c4 and time
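A hedged sketch (the literal values are invented) of confirming which index the optimizer actually picks for each pattern before deciding what to keep or drop:

-- Pattern 2): c2 and c3 and time.
EXPLAIN SELECT c1, c2, c3, `time`
FROM Sample1
WHERE c2 = 'abc'
  AND c3 = 1
  AND `time` BETWEEN 1370000000 AND 1372000000;
-- The key column of the EXPLAIN output should show varchar_c3_time_idx here;
-- repeating the check for patterns 1), 3) and 4) shows whether all four
-- indexes really earn their insert/update cost.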

Cardinality-wise, c2, c3 and c4 are very low (e.g. out of one million rows, c2, c3 and c4 each have about 50 unique values).

The time column contains mostly unique values.

Selects, inserts and updates happen frequently.

The table has 5 indexed fields (4 multi-column indexes). Due to this: 1) inserts and updates on indexed fields become costlier; 2) as the table keeps growing (nearly one billion rows), the index size also increases rapidly.

Kindly suggest a good approach in MySQL to solve this use case.

+1 vote

I was trying to optimize the InnoDB tables. I executed the following query to detect which tables are fragmented:

SELECT TABLE_SCHEMA, TABLE_NAME
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('information_schema', 'mysql') AND DATA_FREE > 0;

After that, I saw that there are 49 fragmented tables. With one of them I executed "optimize table table_name;" and "analyze table table_name;". The result is the same: the table remains fragmented.

Any suggestions? I have followed the mysqltuner recommendations...
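A hedged note (not part of the original question): for InnoDB, OPTIMIZE TABLE is executed as a table rebuild plus analyze, and with innodb_file_per_table the Data_free value afterwards typically still shows a few megabytes of reserved free extents, so a small non-zero value does not necessarily mean the table is still fragmented. A sketch, with my_table as a placeholder name:

-- Rebuild the table; for InnoDB the server notes
-- "Table does not support optimize, doing recreate + analyze instead".
OPTIMIZE TABLE my_table;

-- Re-check the reclaimable space reported for that table.
SELECT TABLE_NAME, DATA_FREE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = 'my_table';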

...