For the past four days MySQL has kept crashing while running scripts, roughly once a day.
Here is the error log:
key_buffer_size=134217728
read_buffer_size=1048576
max_used_connections=39
max_threads=100
threads_connected=34
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 336508 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

thd: 0x92025f38
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x95dce36c thread_stack 0x30000
/usr/sbin/mysqld(my_print_stacktrace+0x2d) [0x6b65ad]
/usr/sbin/mysqld(handle_segfault+0x494) [0x3823d4]
[0x110400]
/usr/sbin/mysqld(MYSQLparse(void*)+0x6aa) [0x3b42da]
/usr/sbin/mysqld(mysql_parse(THD*, char const*, unsigned int, char const**)+0x23e) [0x39ce6e]
/usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0xf35) [0x39df25]
/usr/sbin/mysqld(do_command(THD*)+0xf3) [0x39f0e3]
/usr/sbin/mysqld(handle_one_connection+0x2a0) [0x38dbd0]
/lib/tls/i686/cmov/libpthread.so.0(+0x596e) [0x93d96e]
/lib/tls/i686/cmov/libc.so.6(clone+0x5e) [0xd78a4e]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 0x86982ef4 is an invalid pointer
thd->thread_id=2906
thd->killed=NOT_KILLED
The box has 2 GB of RAM, and by my calculations it should not be hitting any memory limit. I have deliberately lowered the memory settings, but I still get the error.
mysql> show variables like "sort_buffer%";
+------------------+---------+
| Variable_name    | Value   |
+------------------+---------+
| sort_buffer_size | 1048576 |
+------------------+---------+
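As a sanity check against the formula in the crash log, the estimate can be recomputed directly from the server's own variables. This is only a rough sketch; it substitutes max_connections for the log's max_threads, which is an assumption on my part:

SELECT (@@key_buffer_size
        + (@@read_buffer_size + @@sort_buffer_size) * @@max_connections)
       / 1024 AS estimated_kb;
-- With key_buffer_size = 128M, 1M read and sort buffers, and 100 connections
-- this comes to roughly 328 MB, which is in line with the ~336508 K in the log.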
Today it crashed on this SQL query:
ALTER TABLE FieldDefaultValue MODIFY value_field varchar(2000) CHARACTER SET utf8 collate utf8_bin;
Has anyone had a similar experience?

Edit:

The table in question does not actually contain much data; the database has far bigger tables:
mysql> desc fielddefaultvalue;
+----------------------+---------------+------+-----+---------+----------------+
| Field                | Type          | Null | Key | Default | Extra          |
+----------------------+---------------+------+-----+---------+----------------+
| fielddefaultvalue_Id | bigint(20)    | NO   | PRI | NULL    | auto_increment |
| version              | bigint(20)    | NO   |     | NULL    |                |
| value_field          | varchar(2000) | YES  | MUL | NULL    |                |
| optimistic_version   | bigint(20)    | NO   |     | NULL    |                |
| property_fk          | bigint(20)    | YES  | MUL | NULL    |                |
| esg_fk               | bigint(20)    | YES  | MUL | NULL    |                |
+----------------------+---------------+------+-----+---------+----------------+
6 rows in set (0.02 sec)

mysql> select count(*) from fielddefaultvalue;
+----------+
| count(*) |
+----------+
|      690 |
+----------+
1 row in set (0.00 sec)
It also crashes on multi-row inserts (400-500 rows) of small data, but not every time; the same script can run through cleanly once and crash the next time.

Edit 2: after crash recovery, the error log also reports:
InnoDB: which exceeds the log group capacity 9433498.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.

my.cnf:
lower_case_table_names = 1
key_buffer = 16M
key_buffer_size = 128M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover = BACKUP
max_connections = 100
table_cache = 512
thread_concurrency = 4
sort_buffer_size = 1M
read_buffer_size = 1M
table_open_cache = 512
read_rnd_buffer_size = 8M
innodb_file_per_table = 1
open_files_limit = 65536
default_character_set=utf8
query_cache_limit = 1M
query_cache_size = 64M
expire_logs_days = 10
max_binlog_size = 250M
innodb_buffer_pool_size = 256M
innodb_additional_mem_pool_size = 20M
Edit: 5 hours later

It just crashed again on the same "regular" script, a 25,000-row update on a date column.

Same error message:
InnoDB: Log scan progressed past the checkpoint lsn 186 4056481576
110620 17:30:52  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Read
Interestingly, I ran this same script earlier today and it did not fail, but it has now.
The most likely explanation is running out of address space; please post your entire my.cnf.

Running a 32-bit OS in production is not a good idea.

However, what you should do is:

If you can reproduce the bug using a standard Oracle build on a supported operating system, you are not running out of memory/address space, and you have no hardware faults, then you can file the bug with Oracle.

The best approach is to reproduce the test case with the minimum possible amount of data / table size.
Sounds like your innodb_log_file_size is not big enough - try 256 MB in my.cnf: innodb_log_file_size = 256M
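You can check what the server is currently running with first; the 9433498-byte capacity in your log is roughly what two redo log files at the old 5 MB default provide once header overhead is subtracted (an inference from the numbers, not something your log states directly):

SHOW VARIABLES LIKE 'innodb_log_file%';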
You need to shut it down cleanly, remove the old log files, then restart - MySQL will recreate the new log files.
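A minimal sketch of that procedure, assuming a Debian/Ubuntu-style layout with the data directory at /var/lib/mysql and an init.d service script (adjust paths and the service command to your setup):

# in my.cnf, under the [mysqld] section:
innodb_log_file_size = 256M

# then, as root:
/etc/init.d/mysql stop                 # clean shutdown so the redo logs are fully flushed
mv /var/lib/mysql/ib_logfile* /tmp/    # move the old logs aside rather than deleting them outright
/etc/init.d/mysql start                # mysqld recreates ib_logfile0/ib_logfile1 at the new size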
Strange... I do not know how well optimized ALTER TABLE actually is in MySQL. Perhaps it consumes a lot of resources. If the table contains a lot of data, try moving all the data into a temporary table and emptying the main table. Then do your ALTER TABLE and push the data back. If it has to do work on every row, you can split the work up that way and process a batch of records at a time.
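A sketch of that workaround for the table from the question (table and column names taken from the question; the name of the temporary copy is made up here):

-- copy the data aside and empty the original
CREATE TABLE FieldDefaultValue_tmp LIKE FieldDefaultValue;
INSERT INTO FieldDefaultValue_tmp SELECT * FROM FieldDefaultValue;
TRUNCATE TABLE FieldDefaultValue;

-- alter the now-empty table, then push the data back
ALTER TABLE FieldDefaultValue
  MODIFY value_field varchar(2000) CHARACTER SET utf8 COLLATE utf8_bin;
INSERT INTO FieldDefaultValue SELECT * FROM FieldDefaultValue_tmp;
DROP TABLE FieldDefaultValue_tmp;

Note that if other tables reference this one through foreign keys, the TRUNCATE will need those constraints handled first, and with only 690 rows this mainly matters as a pattern for your larger tables.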