How to Upgrade MySQL from 5.6 to 5.7 on CentOS 7

Here are the commands that worked for me:

mysql -V
yum repolist all | grep mysql
sudo yum-config-manager --enable mysql57-community-dmr
sudo yum-config-manager --enable mysql57-community
sudo yum-config-manager --disable mysql56-community
yum repolist all | grep mysql
sudo yum install mysql-community-server
sudo service mysqld status
sudo service mysqld stop
sudo service mysqld start
mysql -V
sudo mysql_upgrade -u root -p --force
sudo service mysqld stop
sudo service mysqld start


This command upgrades the existing system tables (including your old user accounts) so they can be used with the new version:

sudo mysql_upgrade -u root -p --force

Spring Security note – Create a simple authentication-manager that gets users from a DB table – 1

By WZH. Aug 2016

We want to authenticate users against a DB rather than a hard-coded user service with username and password, so the system can work like a production system.

Refer to this article and its code:

Here are the key points:

            <jdbc-user-service data-source-ref="dataSource"
                    users-by-username-query=
                        "select username, password, enabled from users where username = ?"
                    authorities-by-username-query=
                        "select username, role from user_roles where username = ?" />

users-by-username-query and authorities-by-username-query are the two queries that get user + password and user + role from the DB. You do not need to use exactly these field names in your DB definition, but the SQL must return 3 items in this order for users and 2 items in this order for authorities.

After you get this part correct and your DB is OK, you should be able to implement authentication from the DB easily. But this assumes the password is saved in clear text in the DB.
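A minimal schema that matches the default queries above could look like this (the table and column names simply mirror the queries; adjust to your own naming):

```sql
CREATE TABLE users (
  username VARCHAR(50)  NOT NULL PRIMARY KEY,
  password VARCHAR(100) NOT NULL,
  enabled  TINYINT(1)   NOT NULL DEFAULT 1
);

CREATE TABLE user_roles (
  username VARCHAR(50) NOT NULL,
  role     VARCHAR(50) NOT NULL,  -- e.g. ROLE_USER, ROLE_ADMIN
  UNIQUE KEY uni_username_role (username, role),
  FOREIGN KEY (username) REFERENCES users (username)
);
```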

Question – what if your password field stores the MD5 of the real password in the DB, for security? How do you do that in Spring Security?

Let me show you an example of what you should do. Add this line inside <authentication-provider> first:

            <password-encoder hash="md5"/>
            <jdbc-user-service data-source-ref="mySQLDataSource"
                    users-by-username-query=
                        "select loginId, password, true from users where loginId = ?"
                    authorities-by-username-query=
                        "select loginId, authority from user_roles where loginId = ?" />

<password-encoder hash="md5"/> tells Spring Security that the password read from the DB is an MD5 hash. Spring will then compare the MD5 of the login form's input password with the MD5 read from the DB to do the authentication. But when you create/register a new user in your DB, you need to calculate the MD5 in your own code, and you have to ensure your MD5 hash result is the same as Spring's.

Remember this piece of code:

// org.springframework.security.authentication.encoding.Md5PasswordEncoder
PasswordEncoder encoder = new Md5PasswordEncoder();
String hashedPass = encoder.encodePassword("origClearPassword", null); // null = no salt

then you can save hashedPass to your DB as the "password" for hashed authentication.
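With a null salt, Md5PasswordEncoder simply produces the lowercase hex MD5 of the password string. If you register users outside of Spring, a plain-JDK sketch like this (the class and method names are mine) produces a compatible hash:

```java
import java.security.MessageDigest;

public class Md5Demo {

    // Lowercase hex MD5 of the password, matching Md5PasswordEncoder with a null salt
    public static String md5Hex(String clearPassword) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        byte[] digest = md.digest(clearPassword.getBytes("UTF-8"));
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b)); // zero-padded hex per byte
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Known MD5 test vector from RFC 1321
        System.out.println(md5Hex("abc")); // 900150983cd24fb0d6963f7d28e17f72
    }
}
```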


Refer to:




The idle MySQL connection pool closed issue


This is an issue you sometimes face when you forget something in the system config. Logging it here as a reminder for the future.


After several hours or days with nobody using the system, the first user who tries to log in always fails, until after several tries.


A further check of the log found the error: java.sql.SQLException: Connection already closed.

So the real reason is that the JDBC connections in the pool are closed after being idle too long.


The pool has the ability to run a validationQuery to detect whether a connection is closed or not. If the validation query fails, the bad/closed connection is dropped and another connection is created to replace it. This ensures the connection is ready before your code runs a query.

The validation query is a query run by the data source to validate that a Connection is still open before returning it.

For my MySQL case, you need to add this to the JDBC definition in the Spring config:

<bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource">
    <!-- other properties (url, username, ...) omitted -->
    <property name="validationQuery" value="select 1"/>
</bean>


Refer to:


About MySQL index max length


To speed up searches and sorts, indexes (INDEX, PRIMARY KEY and UNIQUE) are a very important tool. By default (in version 5.6) InnoDB supports a 767-byte length limit (<256 UTF8 chars) for an index key. But if your text is a bit longer and you need to index it, what should you do?

You have to understand the two file format concepts: Antelope (the default) and Barracuda (the newer InnoDB file format). Refer to the MySQL manual:

  • Antelope is the original InnoDB file format, which previously did not have a name. It supports COMPACT and REDUNDANT row formats for InnoDB tables and is the default file format in MySQL 5.6 to ensure maximum compatibility with earlier MySQL versions that do not support the Barracuda file format.

So you see that the new file format can support up to 3072 bytes for an index (with innodb_large_prefix enabled), which improves search speed a lot for long text. Here are the steps to make the large index work:

You can change the config in the my.cnf (/etc/mysql/my.cnf)
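For example, the 5.6-era option names look like this (values are a sketch; check your own my.cnf layout):

```
[mysqld]
innodb_file_format = Barracuda
innodb_file_per_table = 1
innodb_large_prefix = 1
```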


or you can change it dynamically at the console:

SET GLOBAL innodb_file_format=Barracuda;
SET GLOBAL innodb_file_per_table=ON;   -- default is ON already
SET GLOBAL innodb_large_prefix=1;

All of those settings are captured when a table is created. (You could change existing tables using ALTER TABLE.)

About ROW_FORMAT=DYNAMIC (or COMPRESSED, which saves space but costs CPU):

ROW_FORMAT=DYNAMIC can be appended to the end of CREATE TABLE or used in ALTER TABLE. MySQL Workbench also supports this when altering a table, on the "Options" tab.
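For example, a table whose long text column needs an index beyond 767 bytes (the table and column names are illustrative):

```sql
-- With Barracuda + innodb_large_prefix, this 500-char utf8 index
-- (up to 1500 bytes) is allowed; under Antelope the prefix would be
-- limited to 767 bytes.
CREATE TABLE articles (
  id  INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  url VARCHAR(500) NOT NULL,
  INDEX idx_url (url)
) ENGINE=InnoDB ROW_FORMAT=DYNAMIC;
```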

Improve the MyBatis performance


Some tips on improving MyBatis performance:

  1. Set fetchSize to a considerable amount.
  2. Only retrieve the needed columns from the DB and map them to the Java object.
  3. Try to reduce nested selects in one user interaction; use joins at the SQL level instead, in a single DB call.
  4. Use cache integration to improve re-read speed, such as Memcached or Ehcache.
  5. Optimize the SQL and the DB itself, such as adding indexes for sorting and ordering.
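For tip 1, fetchSize can be set per statement in the mapper XML (the statement id, columns and result type here are illustrative):

```
<select id="selectAllSamples" resultType="Sample" fetchSize="500">
  select sampleid, samplename from samples
</select>
```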



How to run Memcached on MyBatis


1. Install Memcached

To start, install memcached via apt-get, for example on Ubuntu 12.04:

sudo apt-get install memcached

It auto-starts memcached:

ps -ax | grep memcac
21199 ?        Sl     0:00 /usr/bin/memcached -m 64 -p 11211 -u memcache -l

Refer to the official MyBatis link:
Add the jar to your Maven build.
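Something like this in the pom.xml (the version number is an assumption; check the current release of the mybatis-memcached adapter):

```
<dependency>
  <groupId>org.mybatis.caches</groupId>
  <artifactId>mybatis-memcached</artifactId>
  <version>1.0.0</version>
</dependency>
```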

and then add the memcached cache to whichever MyBatis mapper you want to use it in:
<mapper namespace="org.acme.FooMapper">
  <cache type="org.mybatis.caches.memcached.MemcachedCache" />
  <!-- mapped statements -->
</mapper>
Create a properties file (memcached.properties) and put it on your class path, e.g. the resources folder:

# any string identifier, used as the cache key prefix
org.mybatis.caches.memcached.keyprefix = _mybatis_
# space separated list of ${host}:${port}
org.mybatis.caches.memcached.servers = 127.0.0.1:11211
# the expiration time (in seconds)
org.mybatis.caches.memcached.expiration = 600
org.mybatis.caches.memcached.asyncget = true
org.mybatis.caches.memcached.timeout = 600
org.mybatis.caches.memcached.timeoutunit = java.util.concurrent.TimeUnit.SECONDS
# if true, objects will be GZIP compressed before putting them to Memcached
org.mybatis.caches.memcached.compression = false

In fact, if you add multiple cache servers to org.mybatis.caches.memcached.servers, it will have failover ability among them: it automatically continues using a live one if one dies.

This integration is in fact based on spymemcached, an asynchronous, single-threaded Memcached client. When you call any caching-related method on spymemcached's MemcachedClient, it is handled asynchronously: the client method writes the details of the operation into a queue and returns control back to the caller, while the actual interaction with the Memcached server is handled by a separate thread running in the background.

My testing shows that it can cut DB data read time by at least 50% when the data is already in memcached. This also shows that MyBatis's built-in cache is not big enough, because it is not designed purely for caching.


How to find the difference script between two DBs in MySQL

By W.Zh

In MySQL, you often want to compare two versions of a DB and create a diff script, especially for DB migration and schema upgrades. Here is the way to use Workbench to create the SQL file for it:

  1. In Workbench, go to model mode (use the bottom-left corner to create a Model if you do not have one); you can only access this feature in model mode.
  2. Menu -> Database -> Synchronize with Any Source -> Select Database Updates:
    Check: "Source" - Live Database Server
    "Destination" - Live Database Server
    "Send Updates to:" -> ALTER Script Files

    If you just want to save the DB diff to SQL rather than immediately sync one DB to another (for example, to find schema changes between the Staging and Dev servers), then remember to select "Send Updates to:" instead of directly syncing to the target DB.

  3. After that, just follow the wizard step by step to get your diff files.

How to make the EasyUI JS Datagrid support filter and scrollview together


Issue: when you use the EasyUI datagrid and have millions of rows, you need pagination or scrollview to handle it. But you might find that your filter does not work properly after you add the scrollview or pagination feature.

The filter may conflict with the pagination or table scroll feature. The most common symptom is that the filter sometimes works, but sometimes does not; the problem occurs randomly on the UI.

Reason: most often this is because you use the local filter while the datagrid data comes from remote/server-side JSON. Then you will see this problem easily.

Solution: you need to run the filter and the pagination/scroll either both on the server side or both locally. Here is some sample code I created to support the filter and scroll together, both from the remote server side.

Two more extension js files you need:


HTML for table:

<table id="data_list_table" style="height:750px"  ></table>

JS for loading the table:

jQuery(document).ready(function() {
    $('#data_list_table').datagrid({
        url : '<%=path%>/web/sampleController/getSampleByPage',
        toolbar : '#data_list_table_toolbar',
        nowrap : true,
        rownumbers : true,
        view : scrollview,          // from the scrollview extension JS
        remoteFilter : true,        // send filterRules to the server
        pagination : false,
        pageSize : 30,
        columns : [[
            {field:'sampleid',   title:'DB ID', width:35},
            {field:'samplename', title:'Slot Name', sortable:true, order:'asc', width:110},
            {field:'sampletype', title:'Slot Type', sortable:true, order:'asc', width:95},
            {field:'sampledesc', title:'Slot Description', width:250}
        ]]
    });

    // enable the filter row for the table
    var dg = $('#data_list_table').datagrid('enableFilter');
});

So every time you load the table, and also when you scroll to the next page of data, the HTTP request will contain this data:

filterRules – contains the filter rules, and you need to convert them into your SQL WHERE conditions.
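The filterRules parameter arrives as a JSON array of objects with field, op and value entries. A minimal sketch of turning already-parsed rules into a WHERE clause (the Rule class, the supported ops and the method names are my assumptions; parse the JSON string with your JSON library first, and bind the values as parameters to avoid SQL injection):

```java
import java.util.Arrays;
import java.util.List;

public class FilterRuleDemo {

    // One parsed entry of the easyui filterRules JSON array
    static class Rule {
        final String field, op, value;
        Rule(String field, String op, String value) {
            this.field = field; this.op = op; this.value = value;
        }
    }

    // Build a WHERE clause with '?' placeholders; the caller binds the
    // corresponding values via PreparedStatement.setString(...)
    public static String buildWhere(List<Rule> rules) {
        StringBuilder sb = new StringBuilder();
        for (Rule r : rules) {
            sb.append(sb.length() == 0 ? " WHERE " : " AND ");
            if ("contains".equals(r.op)) {
                sb.append(r.field).append(" LIKE ?"); // bind "%" + value + "%"
            } else if ("equal".equals(r.op)) {
                sb.append(r.field).append(" = ?");
            } else {
                throw new IllegalArgumentException("unsupported op: " + r.op);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<Rule> rules = Arrays.asList(
                new Rule("samplename", "contains", "abc"),
                new Rule("sampletype", "equal", "slot"));
        System.out.println(buildWhere(rules));
        // -> " WHERE samplename LIKE ? AND sampletype = ?"
    }
}
```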

Here is a sample processing it in Spring and Java:

@RequestMapping(value = "/getSampleByPage", method = { RequestMethod.POST,
        RequestMethod.GET })
public EASYUIJSONTransport getSampleByPage(SearchInput search,
        HttpServletRequest request) throws Exception {
    int total = sampleService.totalCountSample();
    List<Sample> samples = searchbypage(search); // your own DB access method

    return new EASYUIJSONTransport(total, samples);
}

public class SearchInput {
    private String sort = "samplename";
    private String order = Common.sort_asc;
    private int page = Common.default_page;
    private int rows = Common.default_rows;
    private String filterRules = "";
    // getters/setters omitted
}


Reference:

Create Data Source BeanAlreadyExistsException Error

By W.ZH Dec 2011


When I used my Python script to install my data source, I got this error:


Bean already exists: “weblogic.j2ee.descriptor.wl.JDBCPropertyBeanImpl@2c56a3a([gis_dbSource]/JDBCDriverParams/Properties/Properties[user])”

So I suspected there was a same-named DS that had not been cleaned up, so I went to the WLS console and tried to delete the DS. After I deleted it and restarted my WLS, my script got this error again! And again! I had met this error years ago, but just could not dig it out this time…


Go to WLS's domain config folder; under the jdbc sub-folder you can see your DS's XML file is still there. But if you open config.xml, it does not include your DS (so the console UI cannot see it).


So remove that data source XML in the jdbc sub-folder, double check that config.xml does not have your DS item, then restart the WLS and try again.

Service Name VS SID – Database Access

By W.ZH, Dec 2011


To use a service name, if you use sqlplus to open the DB, then you can use:

sqlplus sys/$system_pass@'(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=$db_ip)(PORT=$db_port))(CONNECT_DATA=(SERVICE_NAME=$service_name)))'

For access from JDBC code or a WLS Datasource, the DB URL is:



But if you are using a SID – the old way:

sqlplus sys/$system_pass@'(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=$db_ip)(PORT=$db_port))(CONNECT_DATA=(SID=$db_sid)))'

For access from JDBC code or a WLS Datasource, the DB URL is:


Please note one is ":" and one is "/" in the URL!
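To my knowledge, the two Oracle thin-driver URL forms look like this (host, port and names are placeholders): a SID uses ":", a service name uses "/" (often with "//"):

```
# SID (old way) - colon before the SID
jdbc:oracle:thin:@$db_ip:$db_port:$db_sid

# Service name - slash before the service
jdbc:oracle:thin:@//$db_ip:$db_port/$service_name
```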


So what is the difference between SID and Service Name access? Basically, a SID identifies a single DB instance, while a Service Name can cover a cluster environment. You can read the Oracle DB documentation for details.