Newsfeeds
Planet MySQL
Planet MySQL - https://planet.mysql.com

  • MySQL Insert Statement to Add Rows
    This tutorial explains the MySQL INSERT command for inserting single and multiple rows into a table. Here, you'll find several variations for adding records, with fully working examples. 1. INSERT Statement Syntax 2. INSERT Single Row 3. INSERT Default Values 4. INSERT Date Columns 5. INSERT Multiple Rows Let's now go through each of these sections one by one. MySQL INSERT statement: As stated initially, the INSERT command is a built-in MySQL statement that inserts the specified records into a given table. So, let's first check the details and see how to use the INSERT statement. The post MySQL Insert Statement to Add Rows appeared first on Learn Programming and Software Testing.
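    As a quick illustration of the variations the tutorial lists, here is a hedged sketch against a hypothetical employees table (the table, columns, and data are illustrative, not taken from the tutorial):

        -- single row, with an explicit column list
        INSERT INTO employees (name, hire_date) VALUES ('Alice', '2019-07-01');

        -- a row built entirely from column defaults
        INSERT INTO employees () VALUES ();

        -- a date column filled by a function instead of a literal
        INSERT INTO employees (name, hire_date) VALUES ('Bob', CURDATE());

        -- multiple rows in a single statement
        INSERT INTO employees (name, hire_date)
        VALUES ('Carol', '2019-07-02'),
               ('Dave',  '2019-07-03');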

  • MySQL: The Impact of Transactions on Query Throughput
    Recently I had a customer where every single query was running in a transaction, even the simplest selects. Unfortunately, this is not unique, and many connectors, like the Java connector, love to do that. In their case, the connector set autocommit=off on the connection at the beginning, and as these were permanent connections they never, or almost never, reconnected. In the slow query log we could see a commit after every select. So why is this a problem?

    Test Case
    As always, the best way to deal with a problem is to test it. I created two t3.xlarge EC2 instances with Ubuntu, one for the application and one for the database. I used sysbench to run my tests. I created a table with 1 million records and ran simple primary key point selects against the database, using three threads and running only 1 select per transaction; every test ran for 60 seconds. I ran every test 3 times and took the average of the three runs. These are the only MySQL variables I changed:

        innodb_buffer_pool_size=5G
        innodb_adaptive_hash_index=off
        query_cache_size=0
        query_cache_type=0

    The dataset fit in memory, as there were no disk reads. I tested four cases:

        - PK selects without a transaction (select only)
        - PK selects inside a transaction (begin, select, commit)
        - PK selects with autocommit=off (select, commit)
        - PK selects in a read-only transaction (START TRANSACTION READ ONLY, select, commit)

    Disclaimer
    I was not trying to run a proper performance test to find the maximum performance of the server. I was only trying to demonstrate the impact on the number of executed queries in a limited time window when every single query runs in a transaction.

    Test results on MySQL 5.6
    Because my customer was running MySQL 5.6, I did my first tests on that version as well. Here we can see the average transactions per second (inside InnoDB everything is a transaction, even if we do not start one explicitly). In the same time window, sysbench could run more than twice as many PK lookups without transactions as inside transactions. You can see there are big differences here. But what does that mean? Are my selects slower? It depends on your point of view: if you measure the whole transaction time, yes, it takes more time, but a single select statement should take the same amount of time. So your individual selects are not going to be slower. Let me oversimplify this. Say your database server can run only 1 query per second; in 100 seconds it can run 100 queries. What if we use transactions? In the first second the server runs the begin, then it runs the select, and in the third second it runs the commit. So it runs 1 select every three seconds: in 100 seconds, that is 33 begins, 33 selects, and 33 commits. Your query time is not going to be higher, but your throughput is going to be impacted. If there is no real reason (for example, needing repeatable reads) why your selects should run in transactions, I would recommend avoiding them: you save a lot of extra round trips between the application and the database server, and some CPU time. Even though begin and commit are very fast, at large scale the saved time adds up as well.

    What is a transaction?
    From the InnoDB manual: Transactions are atomic units of work that can be committed or rolled back.
    When a transaction makes multiple changes to the database, either all the changes succeed when the transaction is committed, or all the changes are undone when the transaction is rolled back. Database transactions, as implemented by InnoDB, have properties that are collectively known by the acronym ACID, for atomicity, consistency, isolation, and durability. InnoDB associates every transaction with a transaction ID (the TRX_ID field is part of every single row). That transaction ID is only needed for operations that might perform writes or take locks, like SELECT ... FOR UPDATE. The transaction ID is also used to create a read view, a snapshot of the database, as part of the MVCC mechanism. MVCC keeps the old versions of pages around while the transaction is running, for example to be able to roll back; these pages are also used to provide consistent reads. Even if it is just a single long-running select, because we started a transaction, InnoDB has to keep track of these changes. There are other internal processes that happen when we explicitly start a transaction, which might not be necessary for a simple select query. This is one of the reasons read-only transactions were developed: to avoid these side effects. When autocommit is enabled and we do not start transactions explicitly, insert/delete/update operations are still handled inside MySQL as transactions, but InnoDB tries to detect non-locking select queries, which can be handled like read-only transactions. I was curious whether there are any differences or improvements between versions, so I re-ran my tests on multiple MySQL versions.

    Testing different versions
    I tested MySQL 5.6, Percona Server 5.6, MySQL 5.7, Percona Server 5.7, MySQL 8.0, and Percona Server 8.0. What can we see here? The big difference is still there in all versions, and I was not expecting it to disappear, because from the application's point of view the round trips to the database server remain. We knew there is a regression in the newer versions of MySQL regarding the performance of point lookups; Mark Callaghan and many other people have already blogged about this, but as we can see the regression is quite big. I am already working on another blog post digging deeper into what is causing this degradation. The newer MySQL servers are more optimized for highly concurrent workloads and can handle more threads at the same time than previous versions; there are many detailed blog posts in this area as well. We can also see that, for these simple queries, starting read-only transactions is actually a little bit slower than starting normal transactions. The difference is between 1-4%, and in MySQL 8.0 it is around 1%. That is a very small difference, but it is consistent across all the tests.

    Conclusion
    If you are running every single select query in a transaction, your throughput will definitely be lower than without transactions. If your application requires higher throughput, you should not use explicit transactions for every select query.
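    To make the four tested cases concrete, they correspond to statement sequences along these lines (a sketch against sysbench's generated sbtest1 table; the id value is a placeholder):

        -- 1. no transaction (autocommit=1, the default): one round trip per lookup
        SELECT c FROM sbtest1 WHERE id = 42;

        -- 2. explicit transaction: three round trips per lookup
        BEGIN;
        SELECT c FROM sbtest1 WHERE id = 42;
        COMMIT;

        -- 3. autocommit=off: the first statement implicitly opens a transaction
        SET autocommit = 0;
        SELECT c FROM sbtest1 WHERE id = 42;
        COMMIT;

        -- 4. read-only transaction: lets InnoDB skip write-related bookkeeping
        START TRANSACTION READ ONLY;
        SELECT c FROM sbtest1 WHERE id = 42;
        COMMIT;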

  • How to fix error when MySQL client fails to load SQL file with Blob data
    In one of my latest database restore jobs, I was helping a customer with issues related to mysqlbinlog and the mysql client, and I wanted to share it here. In case you didn't know, mysql is a simple SQL shell with input line editing capabilities, while mysqlbinlog is a utility for processing the binary logs of a MySQL server. In this case, the server was MariaDB, but the utilities are the same. The database version was 10.1.38-MariaDB.

    So, why use mysqlbinlog? There are many reasons for using mysqlbinlog to process binary log files, but in this case, it was used for point-in-time recovery. Let's say you have an erroneous transaction that you ran at 3:05 p.m. and your last full backup was taken at 12 p.m. To restore your database up to 3:05 p.m., you need to restore the full backup from 12 p.m. and then apply the events from your binary logs up to the moment just before you ran the erroneous transaction. This procedure is only possible if you still have the binary logs generated between 12 p.m. and 3:05 p.m. on the server. Applying the events to the database can be done with the mysql client. Let's say you have restored the full backup and it's time to apply the missing events after 12 p.m., but before the erroneous transaction. Using mysqlbinlog, you can parse the binlog and confirm the position that is just before your erroneous transaction. After you find the position, your command to generate SQL from the binary log will look something like this:

        mysqlbinlog mysql-bin.000072 --stop-position=16208282 --result-file=binlog72.sql

    Now that you have the SQL file that will apply all missing events since your last backup, you will want to run it on the MySQL server. We use the mysql client for that. Typically, it is something like:

        mysql -uadmin -p < binlog72.sql

    But in our case, this command produced errors, shown below:

        Enter password: ****************
        ERROR at line 66848: ASCII '\0' appeared in the statement, but this is not allowed unless option --binary-mode is enabled and mysql is run in non-interactive mode. Set --binary-mode to 1 if ASCII '\0' is expected. Query: 'INSERT into pythian_msq(ID, KEY, REGION, CREATED_DATE, PRIORITY, BYTES) values ('b67d8bd1-f8f7-8ffb-d2a4-88da8cf14b21', 'd4d0a754-019b-39af-b568-2f2bf93b2845', '2.8.0', 1553862137605, NULL, _binary '??.

    Obviously, something was wrong with the binary data, as the error message was saying that --binary-mode is not enabled. But the rest of the message was confusing: "unless mysql is run in non-interactive mode". Per my understanding, we were running mysql in non-interactive mode, so the error message was not easy to understand. From the manual on --binary-mode: By default, ASCII '\0' is disallowed and '\r\n' is translated to '\n'. This switch turns off both features, and also turns off parsing of all client commands except \C and DELIMITER, in non-interactive mode (for input piped to mysql or loaded using the 'source' command). This is necessary when processing output from mysqlbinlog that may contain blobs.
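    As an aside, the stop position used above is typically found by decoding the binlog around the time of the erroneous transaction and reading it off the event headers. A hedged sketch (the datetime values are placeholders, not from the original case):

        mysqlbinlog --base64-output=decode-rows --verbose \
            --start-datetime="2019-03-29 12:00:00" \
            --stop-datetime="2019-03-29 15:10:00" \
            mysql-bin.000072 | less

        # every event is preceded by a header like "# at 16208282"; the offset of the
        # event just before the erroneous transaction becomes the --stop-position value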
    The table involved in the failing insert had the following design:

        CREATE TABLE `pythian_msq` (
          `ID` char(36) COLLATE utf8_unicode_ci NOT NULL,
          `KEY` char(36) COLLATE utf8_unicode_ci NOT NULL,
          `CREATED_DATE` bigint(20) NOT NULL,
          `PRIORITY` int(11) DEFAULT NULL,
          `SEQUENCE` bigint(20) NOT NULL AUTO_INCREMENT,
          `BYTES` blob,
          `REGION` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
          PRIMARY KEY (`KEY`,`ID`,`REGION`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    The column that was failing in the insert is defined as blob. Based on quick research and on the error message, the next command we tried added --binary-mode=1:

        mysql -uadmin -p --binary-mode=1 < binlog72.sql

    Again, there was an error when executing the SQL file with the mysql client, this time reporting bad SQL syntax:

        Enter password: ****************
        ERROR 1064 (42000) at line 66849: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'VJKF3BY7', 'solution-center', 0, NULL, 'idPrefixPersist', '44a6a325-d17d-4ce9-b7' at line 1

    In another try for the same restore, we piped the output from mysqlbinlog straight into the mysql client:

        mysqlbinlog mysql-bin.000072 --stop-position=16208282 | mysql -uadmin -p --binary-mode=1

    After some more tries, we were still not able to make this SQL file run. We did some checks to see if the binary log was corrupted, in case there was some corruption in the blob column, but that was not the case. Our next try was to source the file after connecting with the mysql client:

        mysql -uadmin -p
        MariaDB [test]> source binlog72.sql

    The client starts processing the events from the SQL file and then errors again:

        Query OK, 1 row affected (0.00 sec)
        Rows matched: 1  Changed: 1  Warnings: 0
        Query OK, 0 rows affected (0.01 sec)
        Query OK, 0 rows affected (0.00 sec)
        Query OK, 0 rows affected (0.00 sec)
        Query OK, 0 rows affected (0.00 sec)
        Query OK, 0 rows affected (0.00 sec)
        ERROR at line 66848 in file: 'binlog72.sql': ASCII '\0' appeared in the statement, but this is not allowed unless option --binary-mode is enabled and mysql is run in non-interactive mode. Set --binary-mode to 1 if ASCII '\0' is expected. Query: 'INSERT into pythian_msq(ID, KEY, REGION, CREATED_DATE, PRIORITY, BYTES) values ('b67d8bd1-f8f7-8ffb-d2a4-88da8cf14b21', 'd4d0a754-019b-39af-b568-2f2bf93b2845', '2.8.0', 1553862137605, NULL, _binary '??.

    In between each of these tests, we had to restore the latest backup and try again. Our next try was to start the mysql client with --binary-mode=1 and then source the SQL file:

        mysql -uadmin -p --binary-mode=1
        MariaDB [test]> source binlog72.sql

    With this last combination, the client processed the file without any errors and we were able to restore the database up to the point in time just before the erroneous transaction was run.

    Conclusion
    While most of the MySQL tools work with no major issues and are very straightforward for most DBAs, we may run into a use case that requires a more careful review of what is wrong and where the errors come from. In most cases, the error message gives enough information about what is wrong and we can simply follow the solution. In other cases, the error may be misleading or verbose. The only way we found to make the mysql client load blob data generated by mysqlbinlog was to source the SQL file from a mysql client session started with --binary-mode=1.
I hope this will save you some time if you are running into the same issues while doing PITR for your database.

  • Debian 10 released with MariaDB 10.3
    The Debian project announced their 10th release, codename Buster, on July 6th, 2019. Debian 10 ships with MariaDB 10.3 and Galera. There is no separate MariaDB Connector C package; instead, MariaDB 10.3 includes MariaDB Connector C 3.0. Like most other popular Linux distributions, Debian prefers MariaDB over the Oracle-owned alternative and this […] The post Debian 10 released with MariaDB 10.3 appeared first on MariaDB.org.

  • Hibernate Tip: How does Hibernate’s native ID generator work
    Hibernate Tips is a series of posts in which I describe a quick and easy solution for common Hibernate questions. If you have a question for a future Hibernate Tip, please post a comment below.

    Question: One of the readers of my article about using Hibernate with a MySQL database asked the following question: What is the difference between the following two recommended approaches concerning the primary keys?

        // option 1
        @GeneratedValue(strategy = GenerationType.IDENTITY)

        // option 2
        @GeneratedValue(generator = "native")
        @GenericGenerator(name = "native", strategy = "native")

    Solution: As so often in software development, the answer to that question is: "It depends ...". In this case, it depends on the Dialect that you configured in your persistence.xml.

    The native strategy: When you use a @GenericGenerator that references the native strategy, Hibernate uses the strategy natively supported by the configured Dialect. You can find the corresponding code in the getNativeIdentifierGeneratorStrategy() method of the Dialect class. Here is the code that's used in Hibernate 5.4:

        public String getNativeIdentifierGeneratorStrategy() {
            if ( getIdentityColumnSupport().supportsIdentityColumns() ) {
                return "identity";
            }
            else {
                return "sequence";
            }
        }

    For all commonly used databases except MySQL, this method returns the String "sequence". If you're using a MySQL dialect, it returns "identity".

    Using the native strategy with different dialects: Let's use the following simple entity with a MySQL and a PostgreSQL database.

        @Entity
        public class Author {

            @Id
            @GeneratedValue(generator = "native")
            @GenericGenerator(name = "native", strategy = "native")
            private Long id;

            @Version
            private int version;

            private String firstName;
            private String lastName;

            ...
        }

    When you persist a new Author entity using the PostgreSQL dialect, you can see that Hibernate first calls a sequence named native to generate the primary key value, and in the next step inserts the new record.

        14:03:27,709 DEBUG [org.hibernate.SQL] - select nextval ('native')
        14:03:27,742 INFO  [org.thoughts.on.java.model.TestIdentityStrategy] - After persist
        14:03:27,758 DEBUG [org.hibernate.SQL] - insert into Author (firstName, lastName, version, id) values (?, ?, ?, ?)

    If you use a MySQL database instead, you can see that Hibernate utilizes an auto-incremented column instead.

        14:05:15,739 DEBUG [org.hibernate.SQL] - insert into Author (firstName, lastName, version) values (?, ?, ?)
        14:05:15,760 DEBUG [org.hibernate.id.IdentifierGeneratorHelper] - Natively generated identity: 1

    The differences between the native and identity strategies: So, as long as you use both mappings with a MySQL database, the result will be the same. But there are still a few differences: The behavior of the native strategy changes if your database dialect returns a different strategy as its natively supported one. That could happen because you switch to a different database dialect or because the internal implementation of the dialect changes. The @GenericGenerator mapping is much harder to read because it depends on the implementation of your database dialect. And the @GenericGenerator annotation is Hibernate-specific, so you can't use this mapping with any other JPA implementation.

    Mapping Recommendations: If you've read some of my other tutorials, you can probably guess my preferred mapping. I strongly recommend using the IDENTITY mapping instead of the native one. There are multiple reasons for that.
    The most obvious ones are the better readability and better portability to other JPA implementations. But you also need to keep in mind that the generation of primary keys can have a significant impact on the performance of your application. I therefore want to define the strategy explicitly and not rely on any internal implementations. I don't see the adaptability of the native strategy to different database dialects as a huge advantage: for almost all dialects, Hibernate uses the SEQUENCE strategy anyway, and if you need to support MySQL databases, you can easily override the primary key strategy in your mapping file.

    Learn more: If you are interested in primary key mappings, you should also read the following articles: 5 Primary Key Mappings for JPA and Hibernate Every Developer Should Know; How to generate primary keys with JPA and Hibernate; How to generate UUIDs as primary keys with Hibernate.

    Hibernate Tips Book: Get more recipes like this one in my new book Hibernate Tips: More than 70 solutions to common Hibernate problems. It gives you more than 70 ready-to-use recipes for topics like basic and advanced mappings, logging, Java 8 support, caching, and statically and dynamically defined queries. Get it now as a paperback, ebook or PDF.

    The post Hibernate Tip: How does Hibernate's native ID generator work appeared first on Thoughts on Java.
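    As a concrete footnote to this tip, the two strategies that native resolves to correspond to different schema objects on the database side. A hedged sketch in SQL (the DDL mirrors the Author entity above but is illustrative, not taken from the post):

        -- MySQL: the identity strategy relies on an auto-increment column
        CREATE TABLE Author (
            id        BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            firstName VARCHAR(255),
            lastName  VARCHAR(255),
            version   INT NOT NULL
        );

        -- PostgreSQL and most other dialects: the native strategy uses a sequence,
        -- matching the "select nextval ('native')" call in the log output above
        CREATE SEQUENCE native;

    With a sequence, Hibernate can fetch the id before running the INSERT; with an auto-increment column, the id is only known after the INSERT, which is why the two log excerpts above differ.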
