
Python: How to use a generator to avoid SQL out-of-memory errors

Published 2020-05-28 11:10

Question:

I have the following method that queries a MySQL database. The query runs on a server that I have no access to, so I cannot increase its memory. I am new to generators, and after reading up on them I thought I could convert this method to use one.

def getUNames(self):
    globalUserQuery = ur'''SELECT gu_name FROM globaluser WHERE gu_locked = 0'''
    global_user_list = []
    try:
        self.gdbCursor.execute(globalUserQuery)
        rows = self.gdbCursor.fetchall()
        for row in rows:
            uName = unicode(row['gu_name'], 'utf-8')
            global_user_list.append(uName)
        return global_user_list
    except Exception, e:
        traceback.print_exc()

And I use this code as follows:

for user_name in getUNames():
...

This is the error that I was getting from server side:

Out of memory (Needed 725528 bytes)
Traceback (most recent call last):
...
packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
    raise errorclass, errorvalue
OperationalError: (2008, 'MySQL client ran out of memory')

How should I be using a generator to avoid this? My attempt:

self.gdbCursor.execute(globalUserQuery)
while True:
    row = self.gdbCursor.fetchone()
    if row is None:
        break
    yield row

I'm not sure the above is the right approach, since callers of my database method expect a list. What I think would be ideal is to fetch a chunk of the query results and return it as a list, and once that chunk is consumed, have the generator produce the next set, for as long as the query returns results.
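The chunk-at-a-time behavior described here is commonly built by wrapping `fetchmany` in a generator. Note that with MySQLdb's default cursor the whole result set is still buffered client-side when `execute()` returns, so this only bounds memory when combined with a server-side cursor, as the answer below explains. A sketch against the generic DB-API cursor interface; `FakeCursor` is a hypothetical in-memory stand-in so the example runs without a database:

```python
def iter_rows(cursor, query, chunk_size=1000):
    """Yield rows one at a time, fetching them from the cursor
    in chunks of `chunk_size` to bound client-side memory."""
    cursor.execute(query)
    while True:
        rows = cursor.fetchmany(chunk_size)
        if not rows:        # empty list -> result set exhausted
            return
        for row in rows:
            yield row

# Hypothetical stand-in for a DB-API cursor, for demonstration only.
class FakeCursor(object):
    def __init__(self, data):
        self._data = list(data)
    def execute(self, query):
        self._pos = 0
    def fetchmany(self, size):
        chunk = self._data[self._pos:self._pos + size]
        self._pos += size
        return chunk

cur = FakeCursor([('alice',), ('bob',), ('carol',)])
names = [row[0] for row in iter_rows(cur, "SELECT ...", chunk_size=2)]
print(names)  # ['alice', 'bob', 'carol']
```

The caller still sees a plain iterable of rows, so `for user_name in ...` works unchanged; only the fetching strategy differs.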

Answer 1:

With MySQLdb, the default cursor loads the entire result set into a Python list when the call to cursor.execute(..) is made. For a large query that may cause a MemoryError whether or not you use a generator.

Instead, use an SSCursor or SSDictCursor. These keep the result set on the server side and let you iterate through the items in the result set on the client side:

import MySQLdb  
import MySQLdb.cursors as cursors
import traceback

def getUNames(self):
    # You may of course want to define `self.gdbCursor` somewhere else...
    conn = MySQLdb.connect(..., cursorclass=cursors.SSDictCursor)
    #                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    #   Set the cursor class to SSDictCursor here; the plain SSCursor
    #   yields tuples, so dict access like row['gu_name'] below needs
    #   the Dict variant
    self.gdbCursor = conn.cursor()

    globalUserQuery = ur'''SELECT gu_name FROM globaluser WHERE gu_locked = 0'''
    try:
        self.gdbCursor.execute(globalUserQuery)
        for row in self.gdbCursor:
            uName = unicode(row['gu_name'], 'utf-8')
            yield uName
    except Exception as e:
        traceback.print_exc()

There isn't much documentation on the difference between the default Cursor and the SSCursor. The best source I know is the docstrings of the Cursor Mixin classes themselves:

The default cursor uses a CursorStoreResultMixIn:

In [2]: import MySQLdb.cursors as cursors
In [8]: print(cursors.CursorStoreResultMixIn.__doc__)
This is a MixIn class which causes the entire result set to be
    stored on the client side, i.e. it uses mysql_store_result(). If the
    result set can be very large, consider adding a LIMIT clause to your
    query, or using CursorUseResultMixIn instead.

and the SSCursor uses a CursorUseResultMixIn:

In [9]: print(cursors.CursorUseResultMixIn.__doc__)
This is a MixIn class which causes the result set to be stored
    in the server and sent row-by-row to client side, i.e. it uses
    mysql_use_result(). You MUST retrieve the entire result set and
    close() the cursor before additional queries can be performed on
    the connection.
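That "You MUST retrieve the entire result set and close() the cursor" requirement matters when a caller stops iterating early. Wrapping the iteration in try/finally guarantees the cursor gets closed even then. This is a sketch; `StubCursor` is a hypothetical stand-in so it runs without a MySQL connection:

```python
def stream_names(cursor, query):
    """Yield rows, guaranteeing the cursor is closed whether the
    generator is exhausted, errors out, or is abandoned early."""
    cursor.execute(query)
    try:
        for row in cursor:
            yield row
    finally:
        cursor.close()  # also runs when the generator itself is closed

# Hypothetical stand-in for an SSCursor, for demonstration only.
class StubCursor(object):
    def __init__(self, rows):
        self._rows = rows
        self.closed = False
    def execute(self, query):
        pass
    def __iter__(self):
        return iter(self._rows)
    def close(self):
        self.closed = True

cur = StubCursor([('u1',), ('u2',)])
gen = stream_names(cur, "SELECT ...")
next(gen)          # consume one row...
gen.close()        # ...then abandon the generator early
print(cur.closed)  # True -- the finally block still closed the cursor
```

Closing the generator raises GeneratorExit at the paused `yield`, which runs the `finally` block, so the connection is left ready for further queries.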

Since I changed getUNames into a generator, it would be used like this:

for row in self.getUNames():
    ...
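The CursorStoreResultMixIn docstring quoted above also suggests a LIMIT clause as an alternative. One robust way to page with LIMIT is keyset pagination: order by an indexed column and resume after the last value seen. The table and column names below come from the question, but the query shape is an assumption, and `PagingStub` is a hypothetical stand-in for the cursor so the sketch runs without a server:

```python
def iter_names_paged(cursor, page_size=1000):
    """Yield gu_name values in pages ordered by gu_name, so each
    execute() buffers at most page_size rows on the client."""
    last = ''  # keyset: resume after the last name already seen
    while True:
        cursor.execute(
            "SELECT gu_name FROM globaluser"
            " WHERE gu_locked = 0 AND gu_name > %s"
            " ORDER BY gu_name LIMIT %s",
            (last, page_size))
        rows = cursor.fetchall()
        if not rows:
            return
        for (name,) in rows:
            yield name  # with Python 2 MySQLdb you would decode here, as in the question
        last = rows[-1][0]

# Hypothetical in-memory stand-in for the cursor, for demonstration only.
class PagingStub(object):
    def __init__(self, names):
        self._names = sorted(names)
    def execute(self, query, params):
        last, size = params
        self._result = [(n,) for n in self._names if n > last][:size]
    def fetchall(self):
        return self._result

paged = list(iter_names_paged(PagingStub(['bob', 'alice', 'carol']), page_size=2))
print(paged)  # ['alice', 'bob', 'carol']
```

Unlike LIMIT/OFFSET, keyset pagination doesn't slow down on later pages and doesn't skip or repeat rows if the table changes between queries, provided gu_name values are unique.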