
I have a very simple query that inserts a row into the messageRequest table for every API call. Traffic can reach 1000 requests per second or more.

For example: http://myapplication.com/sendMessage.php?phone=123&gsm=mobitel&message=helloworld

Assume anyone with our mobile app will call this URL. When I make about 1000 calls via

ab -n 1000 -c 10 "http://myapplication.com/sendMessage.php?phone=123&gsm=mobitel&message=helloworld"

The query is insert into messageRequest values(null, phone, gsm, message);

It works fine, but the issue shows up while this many concurrent calls are running and I try to use my activity log page at http://myapplication.com/apiLogs.php to see how many SMS API calls were received in the last 30 minutes.

That page just runs SELECT count(*) from messageRequest where created_at > '2016-10-01 12:01:00'

It takes around 100 seconds to return a result. When I stop making concurrent calls via the ab -n command, it works fine again. So I assumed that when 1000 concurrent calls happen and a record is inserted for every call, the MySQL table messageRequest is getting locked.

To avoid this problem, I switched to separate sessions: sendMessage.php connects as user1 and apiLogs.php connects as user2, so two different sessions read and write the same table simultaneously. That didn't help either.

sendMessage.php

define('DB_HOST', 'localhost');
define('DB_USER', 'user1');
define('DB_PASS', ''); 

apiLogs.php

define('DB_HOST', 'localhost');
define('DB_USER', 'user2');
define('DB_PASS', '');
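For reference, the two users can be set up along these lines (the GRANT statements below are only an illustration, not my exact setup):

    -- Illustrative only: one user for the write path, one for the read path
    CREATE USER 'user1'@'localhost' IDENTIFIED BY 'user1';
    CREATE USER 'user2'@'localhost' IDENTIFIED BY 'user2';
    GRANT INSERT ON smsApp.messageRequest TO 'user1'@'localhost';
    GRANT SELECT ON smsApp.messageRequest TO 'user2'@'localhost';
    FLUSH PRIVILEGES;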

I am confused about how to make this work without any waiting time while so many concurrent calls are coming in.

Update

sendMessage.php code

define('DB_HOST', 'localhost');
define('DB_USER', 'user1');
define('DB_PASS', 'user1');

mysql_connect(DB_HOST, DB_USER, DB_PASS);
mysql_select_db('smsApp');

mysql_query(
    "insert into messageRequest values (
        NULL,
        '{$_REQUEST['phone']}',
        '{$_REQUEST['gsm']}',
        '{$_REQUEST['message']}',
        '{$_REQUEST['text']}',
        now()
    )"
);

And I am making concurrent requests with this command:

ab -n 1000 -c 10 "http://myapplication.com/sendMessage.php?phone=123&gsm=mobitel&message=helloworld&text=123"

And while that is running, I am trying to view the page apiLogs.php (source code below):

define('DB_HOST', 'localhost');
define('DB_USER', 'user2');
define('DB_PASS', 'user2');

mysql_connect(DB_HOST, DB_USER, DB_PASS);
mysql_select_db('smsApp');

// PHP variable names cannot start with a digit, so use a valid name here
$thirty_min_ago = new DateTime("30 minutes ago");
$s = $thirty_min_ago->format("Y-m-d H:i:s");
$result = mysql_query("SELECT count(*) from messageRequest where created_at > '$s'");

$response['last_30_min_sms_count'] = current(mysql_fetch_row($result));

echo json_encode($response)."\n";
  • And there are already 1 million records in the messageRequest table. Commented Oct 2, 2016 at 11:15
  • The only thing you can do is optimize your code, or upgrade the hardware it runs on. Commented Oct 2, 2016 at 12:09
  • @Thomas can you give an example of optimizing the code, in what aspect? Commented Oct 2, 2016 at 14:51
  • Not without the code, but 1000 database queries is still a lot. Commented Oct 2, 2016 at 14:55
  • @Thomas it's not 1000 database queries, it's API calls. Please check the updated question with the source code. It's very simple code, just a single DB insert. Commented Oct 2, 2016 at 15:13

1 Answer


Use InnoDB, not MyISAM. That way, the table won't be locked.
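If the table is currently MyISAM, a conversion along these lines would apply (assuming the table name from the question):

    -- Check the current engine, then convert if it is MyISAM
    SHOW TABLE STATUS LIKE 'messageRequest';
    ALTER TABLE messageRequest ENGINE=InnoDB;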

Don't use the deprecated mysql_* API, use mysqli_* or PDO.

"Don't queue it, just do it." That is, it may be significantly faster to simply perform the task rather than going through a queue.

The code snippet is subject to "SQL Injection".
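Both points can be handled at once with a prepared statement. Here is a minimal sketch of the insert using PDO; the column names are guessed from the question's INSERT, and the connection details come from the question:

    $pdo = new PDO('mysql:host=localhost;dbname=smsApp;charset=utf8', 'user1', 'user1');
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    // Placeholders keep the request data out of the SQL text entirely
    $stmt = $pdo->prepare(
        "INSERT INTO messageRequest (`phone`, `gsm`, `message`, `text`, `created_at`)
         VALUES (?, ?, ?, ?, NOW())"
    );
    $stmt->execute([
        $_REQUEST['phone'],
        $_REQUEST['gsm'],
        $_REQUEST['message'],
        $_REQUEST['text'],
    ]);

mysqli with bound parameters works the same way; the important part is that user input never gets concatenated into the query string.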

Do have INDEX(created_at). That SELECT can run entirely in the index.
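Assuming the index does not exist yet, it can be added like this (the index name is arbitrary):

    ALTER TABLE messageRequest ADD INDEX idx_created_at (created_at);

    -- Verify the query uses it; "Using index" in the Extra column means an index-only scan
    EXPLAIN SELECT COUNT(*) FROM messageRequest
    WHERE created_at > '2016-10-01 12:01:00';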

Instead of the DateTime package, you could simply say SELECT ... WHERE created_at > NOW() - INTERVAL 30 MINUTE.
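That shortens apiLogs.php to something like this (a PDO sketch, with the same assumed connection details as above):

    $pdo = new PDO('mysql:host=localhost;dbname=smsApp;charset=utf8', 'user2', 'user2');

    // Let MySQL compute the cutoff; no PHP DateTime needed
    $count = $pdo->query(
        "SELECT COUNT(*) FROM messageRequest
         WHERE created_at > NOW() - INTERVAL 30 MINUTE"
    )->fetchColumn();

    echo json_encode(['last_30_min_sms_count' => (int)$count]) . "\n";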

More

With InnoDB, I suggest innodb_flush_log_at_trx_commit = 2 to cut back significantly on the I/O caused by 1000 separate INSERTs per second.
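The setting can be changed at runtime (note the durability trade-off: up to roughly one second of committed transactions can be lost on a crash):

    -- Takes effect immediately, reverts on server restart
    SET GLOBAL innodb_flush_log_at_trx_commit = 2;

To persist it across restarts, add innodb_flush_log_at_trx_commit = 2 under the [mysqld] section of my.cnf.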


6 Comments

I am already using InnoDB and I have improved the code to use PDO, but with 1000 requests per second, apiLogs.php is still very slow when I view it during that load.
Do you have the index? For a million-row table, that will be very important for the SELECT. Without it, the entire million rows will be scanned every time. This chews up CPU cycles.
@Rici yes, there is an index on created_at
I would build the Summary table with a granularity of an hour or a day. Then you can roll up into weeks or months. Having only a single count is unusual, but OK. Some notes on Summary Tables. (A rough sketch follows the comments below.)
Also, see "More" in my answer.
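A rough sketch of the kind of summary table mentioned in the comments, with hourly granularity (table and column names are made up for illustration; it would be populated periodically, e.g. from cron or a MySQL event):

    CREATE TABLE messageRequest_hourly (
        hr  DATETIME NOT NULL,       -- start of the hour
        cnt INT UNSIGNED NOT NULL,
        PRIMARY KEY (hr)
    ) ENGINE=InnoDB;

    -- Rolls up the previous complete hour; safe to re-run
    INSERT INTO messageRequest_hourly (hr, cnt)
    SELECT DATE_FORMAT(created_at, '%Y-%m-%d %H:00:00') AS hr, COUNT(*)
    FROM messageRequest
    WHERE created_at >= DATE_FORMAT(NOW() - INTERVAL 1 HOUR, '%Y-%m-%d %H:00:00')
      AND created_at <  DATE_FORMAT(NOW(), '%Y-%m-%d %H:00:00')
    GROUP BY hr
    ON DUPLICATE KEY UPDATE cnt = VALUES(cnt);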
