
Python: Who owns the memory in ctypes?

+2 votes

I have a pile of C code that I wrote and want to interface with via the ctypes module.

The C code uses the Boehm-Demers-Weiser garbage collector for all of its memory management. What I want to know is: who owns allocated memory? That is, if my C code allocates memory via GC_MALLOC() (the standard allocation call in that collector) and I access the resulting object via ctypes in Python, will the Python garbage collector assume that it owns the memory and attempt to dispose of it when the ctypes object goes out of scope?

Ideally, the memory is owned by the side that created it, with the other side simply referencing it, but I want to be sure before I invest a lot of time interfacing the two sides together.
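To make the ownership question concrete, here is a small sketch. It uses a ctypes-allocated buffer purely as a stand-in, since GC_MALLOC() itself isn't callable from Python; in the real case the address would come from the C side:

```python
import ctypes

# Stand-in for memory handed to Python by C code; in the real case the
# address would point at a buffer allocated with GC_MALLOC().
buf = ctypes.create_string_buffer(12)   # a ctypes-owned 12-byte buffer
addr = ctypes.addressof(buf)

# A view over that address: the question is whether Python considers
# itself the owner of these 12 bytes and will try to free them.
view = (ctypes.c_char * 12).from_address(addr)
```

The names `buf` and `view` here are illustrative only; the real addresses would come from the C library.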

posted Nov 15, 2016 by anonymous


1 Answer

0 votes

ctypes objects own only the memory that they allocate. Inspect the _b_needsfree_ attribute to determine whether a ctypes object owns the referenced memory.

For example:

This array object owns a 12-byte buffer for the array elements:

 >>> import ctypes
 >>> arr = (ctypes.c_uint * 3)(0, 1, 2)
 >>> arr._b_needsfree_
 1

This pointer object owns an 8-byte buffer for the 64-bit target address:

 >>> p = ctypes.POINTER(ctypes.c_uint * 3)(arr)
 >>> p._b_needsfree_
 1

The following new array object created by dereferencing the pointer does not own the 12-byte buffer:

 >>> ref = p[0]
 >>> ref[:]
 [0, 1, 2]
 >>> ref._b_needsfree_
 0

However, it does have a reference chain back to the original array:

 >>> ref._b_base_ is p
 True
 >>> p._objects['1'] is arr
 True

On the other hand, if you use the from_address() class method, the resulting object is a dangling reference to memory that it doesn't own and for which there's no supporting reference chain to the owning object.

For example:

 >>> arr = (ctypes.c_uint * 2**24)()
 >>> arr[-1] = 42
 >>> ref = type(arr).from_address(ctypes.addressof(arr))
 >>> ref[-1]
 42
 >>> ref._b_base_ is None
 True
 >>> ref._objects is None
 True

An array of 2**24 C unsigned ints (64 MiB) is a big enough allocation that it is backed by mmap (on Unix) instead of the heap, so the pages are returned to the OS as soon as arr is deallocated. Accessing ref[-1] after that segfaults:

 >>> del arr
 >>> ref[-1]
 Segmentation fault (core dumped)
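As an aside, when a reference chain back to the owner is wanted, ctypes.cast() may be a safer way to create a view than from_address(): as far as I know, cast() records the source object in the result's _objects, keeping the buffer alive. A sketch:

```python
import ctypes

arr = (ctypes.c_uint * 3)(0, 1, 2)

# cast() keeps a reference to `arr` in the new pointer's _objects,
# so the 12-byte buffer cannot be freed while `p` is alive:
p = ctypes.cast(arr, ctypes.POINTER(ctypes.c_uint * 3))

del arr                  # the buffer survives; p still references it
values = p.contents[:]   # safe, unlike a dangling from_address() view
```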
answer Nov 15, 2016 by Abhay
Similar Questions
0 votes

I'm required to import a certain DLL called 'NHunspell.dll', which is used for spell checking. I am using Python for the software. Although I checked several websites on how to use ctypes properly, I have been unable to load the DLL.

When I use this code:

 from ctypes import *
 hunspell = cdll.LoadLibrary['Hunspellx64.dll']

I get an error

 hunspell = cdll.LoadLibrary['Hunspellx64.dll']
 TypeError: 'instancemethod' object has no attribute '__getitem__'

I guess it might be a problem with the structure of the DLL, but I have no idea how to import it properly.
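For what it's worth, the traceback above points at the brackets rather than at the DLL itself: LoadLibrary is a method, so it must be called with parentheses. A sketch (the DLL name is the one from the question and is assumed, not verified):

```python
from ctypes import cdll

# Square brackets index an object; parentheses call it. Indexing the
# LoadLibrary method reproduces the TypeError from the question:
try:
    cdll.LoadLibrary['Hunspellx64.dll']           # wrong: indexing a method
except TypeError as exc:
    error = str(exc)

# The correct form (commented out, since the DLL is not present here):
# hunspell = cdll.LoadLibrary('Hunspellx64.dll')
```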

0 votes

Is it possible to call a C macro from ctypes? For example, Python 3.3 introduces some new macros, such as PyUnicode_MAX_CHAR_VALUE, for querying the internal representation of strings.

So I try this in 3.3:

py> import ctypes
py> ctypes.pythonapi.PyUnicode_MAX_CHAR_VALUE
Traceback (most recent call last):
 File "<stdin>", line 1, in <module>
 File "/usr/local/lib/python3.3/ctypes/__init__.py", line 366, in __getattr__
   func = self.__getitem__(name)
 File "/usr/local/lib/python3.3/ctypes/__init__.py", line 371, in __getitem__
   func = self._FuncPtr((name_or_ordinal, self))
AttributeError: python3.3: undefined symbol: PyUnicode_MAX_CHAR_VALUE
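A likely explanation, sketched below: C macros are expanded at compile time and leave no symbol in the shared library, so ctypes can only resolve functions that are actually exported:

```python
import ctypes

# A real exported C-API function resolves fine:
fn = ctypes.pythonapi.PyUnicode_FromString

# A macro leaves no symbol behind, so the lookup raises AttributeError,
# as in the traceback above:
try:
    ctypes.pythonapi.PyUnicode_MAX_CHAR_VALUE
except AttributeError:
    macro_missing = True
```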
0 votes

I've written a program using Twisted that uses SqlAlchemy to access a database using threads.deferToThread(...) and SqlAlchemy's scoped_session(...). This program runs for a long time, but leaks memory slowly to the point of needing to be restarted. I don't know that the SqlAlchemy/threads thing is the problem, but thought I'd make you aware of it.

Anyway, my real question is how to go about debugging memory leak problems in Python, particularly for a long-running server process written with Twisted. I'm not sure how to use heapy or guppy, and objgraph doesn't tell me enough to locate the problem. If anyone has any suggestions or pointers it would be very much appreciated!
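One stdlib option worth noting here is tracemalloc (added in Python 3.4, so it wouldn't have existed for the 2.x-era setup above, but it is the usual first tool today): take snapshots and diff them to find the lines that keep allocating. A sketch with a simulated leak:

```python
import tracemalloc

tracemalloc.start()

snap1 = tracemalloc.take_snapshot()
leak = [bytes(1024) for _ in range(1000)]   # simulated leak (~1 MB)
snap2 = tracemalloc.take_snapshot()

# Diff the snapshots: entries are sorted by size difference, so the
# top entry points at the file and line doing the leaking.
stats = snap2.compare_to(snap1, "lineno")
top = stats[0]
```

In a long-running Twisted process, the same pattern works with snapshots taken minutes apart while the server handles traffic.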

0 votes

Previously, we found that our Python scripts consume too much memory, so I used Python's resource module to restrict RLIMIT_AS's soft and hard limits to 200M.
On RHEL 5.3 this works fine, but on CentOS 6.2 with Python 2.6.6 it reports a memory error (exceeding 200M). I tested with a very small script, and the result was not what I expected: it still uses too much memory on CentOS 6.2.
I can understand that 64-bit machines occupy more virtual memory than 32-bit ones, because some types are wider, but I don't know why they differ so greatly (6M vs. 180M). Or is this only because Python 2.6 on CentOS 6.2 uses a different memory allocator than Python's default one? Could you kindly give me some clues?
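For reference, a sketch of the resource call described above (Unix-only; the 200M figure is the one from the post). Reading the limit is side-effect free; the setrlimit line is left commented out because lowering RLIMIT_AS constrains the whole process:

```python
import resource

# Current address-space limits in bytes (-1 means RLIM_INFINITY):
soft, hard = resource.getrlimit(resource.RLIMIT_AS)

# The restriction described in the question:
# limit = 200 * 1024 * 1024                            # 200 MiB
# resource.setrlimit(resource.RLIMIT_AS, (limit, limit))
```

Note that RLIMIT_AS caps virtual address space, not resident memory, which is one reason a 64-bit interpreter can trip a limit that a 32-bit one stays under.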
