Saturday, May 28, 2016

Do not forget to set -XX:MaxDirectMemorySize for embedded databases


Since version 2.2, OrientDB has migrated from the usage of sun.misc.Unsafe for direct memory management to the usage of direct ByteBuffers. But there is a small side effect which affects all users of embedded databases: the total amount of direct memory which can be allocated inside all ByteBuffers is limited either to the maximum allowed heap size or to the size specified by the -XX:MaxDirectMemorySize parameter.

What happens if you do not set this parameter for an embedded database? Well, nothing dramatic: OrientDB will detect the absence of this parameter and decrease the size of the disk cache. As a result, all database operations will be much slower and overall system performance will suffer.

If your application has no specific requirements, we recommend setting this parameter to 512g for 64-bit JVMs and to 2g for 32-bit JVMs, as we do in the server startup script.
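For example, for an embedded database on a 64-bit JVM this boils down to adding a single flag to the command line which launches your application (the jar name below is just a placeholder):

java -XX:MaxDirectMemorySize=512g -jar my-application.jar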

Setting this parameter to 512g does not mean that OrientDB will consume all 512 GB of memory. The amount of direct memory consumed by OrientDB is limited by the size of the disk cache; more details on this topic may be found in the post below.

Monday, May 2, 2016

How to calculate maximum amount of direct memory consumed by OrientDB


Many users ask how much memory will be consumed by OrientDB and which settings affect this number. This question has become even more relevant since the 2.2 release: in the new release we allocate memory in big chunks (about 1 GB each) and then split them between threads on demand.

So how do you calculate the maximum amount of memory which will be consumed by OrientDB? That is simple. OrientDB uses both heap memory and direct memory, and the direct memory is used for the disk cache.

The memory consumed by the disk cache is controlled by the configuration parameter storage.diskCache.bufferSize, which specifies the maximum amount of memory consumed by the disk cache in megabytes.
This value has to be rounded up to the nearest multiple of 1 GB, i.e. until it is divisible by 1 GB without a remainder (1 GB is the size of a memory chunk allocated by the pool at once).

The rest is easy: take the value calculated above, add the amount of memory consumed by the heap, and you get the maximum amount of memory which will be consumed by OrientDB.
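A minimal sketch of this calculation (the disk cache and heap sizes below are hypothetical placeholders, not recommended values):

public class OrientMemoryEstimate {
  public static void main(String[] args) {
    final long chunkSizeMb = 1024;  // memory is allocated in chunks of about 1 GB
    final long diskCacheMb = 5000;  // value of storage.diskCache.bufferSize, in MB (hypothetical)
    final long heapMb = 2048;       // maximum heap size (-Xmx) of your JVM, in MB (hypothetical)

    // round the disk cache size up to the next multiple of the chunk size
    final long directMemoryMb = ((diskCacheMb + chunkSizeMb - 1) / chunkSizeMb) * chunkSizeMb;

    System.out.println("Maximum direct memory: " + directMemoryMb + " MB");
    System.out.println("Maximum total memory : " + (directMemoryMb + heapMb) + " MB");
  }
}

With these numbers the disk cache is rounded up to 5120 MB, so the maximum total consumption is about 7168 MB.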

P.S.1 The maximum size of a memory chunk, in bytes, which will be allocated at once by OrientDB may be set using the memory.chunk.size property.
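A hedged sketch of changing this property programmatically (assuming the key can be looked up through OGlobalConfiguration.findByKey like the other configuration keys; the 256 MB value is purely illustrative):

import com.orientechnologies.orient.core.config.OGlobalConfiguration;

public class ChunkSizeExample {
  public static void main(String[] args) {
    // hypothetical example: set the chunk size to 256 MB before any storage is opened;
    // "memory.chunk.size" is the key mentioned above, findByKey returns null if the
    // key is unknown in your OrientDB version
    final OGlobalConfiguration chunkSize = OGlobalConfiguration.findByKey("memory.chunk.size");
    if (chunkSize != null)
      chunkSize.setValue(256 * 1024 * 1024);
  }
}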

P.S.2 You may find the actual amount of direct memory consumed by OrientDB by reading the allocatedMemory, allocatedMemoryInMB and allocatedMemoryInGB properties of the com.orientechnologies.common.directmemory:type=OByteBufferPoolMXBean MBean.
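A small sketch of reading these values over JMX from inside the same JVM (assuming an OrientDB storage is already open in this process, so the MBean is registered):

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class DirectMemoryProbe {
  public static void main(String[] args) throws Exception {
    final MBeanServer server = ManagementFactory.getPlatformMBeanServer();
    final ObjectName name =
        new ObjectName("com.orientechnologies.common.directmemory:type=OByteBufferPoolMXBean");

    // attribute names as listed above
    final Object allocatedBytes = server.getAttribute(name, "allocatedMemory");
    final Object allocatedMb = server.getAttribute(name, "allocatedMemoryInMB");

    System.out.println("Allocated direct memory: " + allocatedBytes + " bytes (" + allocatedMb + " MB)");
  }
}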

How to change disk cache size at runtime in OrientDB


Since versions 2.1.16 and 2.2 of OrientDB, it is possible to change the size of the direct memory disk cache at runtime.

It is quite simple to do: merely call com.orientechnologies.orient.core.config.OGlobalConfiguration#DISK_CACHE_SIZE.setValue() at any place in your program and the size of the disk cache will be changed (the passed-in value is the cache size in megabytes). You may do this inside your code or using the OrientDB console or Studio.
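For example (4096 is a hypothetical new cache size of 4 GB, expressed in megabytes):

import com.orientechnologies.orient.core.config.OGlobalConfiguration;

public class ResizeDiskCache {
  public static void main(String[] args) {
    // shrink or grow the disk cache at runtime; the value is interpreted as megabytes
    OGlobalConfiguration.DISK_CACHE_SIZE.setValue(4096);
  }
}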

There are a couple of nuances in the usage of this feature:
  1. Increasing the cache size is a pretty cheap operation.
  2. Decreasing the cache size may take some time, because previously loaded pages have to be freed.
  3. When you decrease the cache size you may get an IllegalStateException (though the probability of this is really low) which tells you that the cache does not have enough memory to keep the pinned pages in RAM.
Pinned pages are a special area of the disk cache which is unloaded from RAM only when the OrientDB storage is closed. They are used by OrientDB clusters and the hash index to speed up storage operations.

Never use long living database instances


Every user who works with OrientDB knows that to manipulate records he or she has to create an ODatabaseDocumentTx instance first. But there is a widespread misunderstanding about the meaning of this object. Many users think that it can be treated as an abstraction of the data stored on disk or on a server. As a result, they create a single instance of this object per thread, assign it to a class field and use the same object during the whole application life. That is not a performance-wise decision. There are several reasons why you should instead consider using short-living database instances acquired from a pool, for example from OPartitionedDatabasePool.

The most important reason is that during the loading of records we put them into a local cache which is based on a WeakHashMap. We need this cache for two reasons:
  1. To avoid OConcurrentModificationException in the case of loading the same record twice in a call stack.
  2. To speed up graph traversal.
Let's look at how WeakHashMap works (in OpenJDK at least):
  1. Every put/get/remove method of this HashMap calls the getTable() method.
  2. getTable(), in turn, calls expungeStaleEntries().
The responsibility of the last method is to remove weak references which are no longer reachable. expungeStaleEntries() uses the ReferenceQueue which was passed during the creation of the WeakReferences to detect unreachable references.

The main problem is the mechanics used to fill and drain this reference queue. When a weak reference becomes a subject of garbage collection, a special high-priority thread, java.lang.ref.Reference.ReferenceHandler, puts such a reference into the reference queue. But the reference queue itself is guarded by an object-wide lock!

Let's put all of this together:
  1. You have a long-living ODatabaseDocumentTx instance.
  2. You use it to load many short-living record objects.
  3. Your WeakHashMap gets polluted by many WeakReferences.
  4. After the next GC run, the ReferenceHandler thread starts to fill and lock the ReferenceQueue and, as a result, locks the WeakHashMap objects.
  5. Your threads become frozen for a long time. We know of situations when threads were frozen for several seconds!
So the main rule of thumb: never use long-living ODatabaseDocumentTx instances, use a database pool instead (see the sketch below). Besides solving the problem described above, the pool also provides support for nested transactions.
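A minimal sketch of the recommended pattern (the database URL and credentials are placeholders):

import com.orientechnologies.orient.core.db.OPartitionedDatabasePool;
import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;

public class PoolUsageExample {
  public static void main(String[] args) {
    final OPartitionedDatabasePool pool =
        new OPartitionedDatabasePool("plocal:/tmp/mydb", "admin", "admin");

    // acquire a short-living database instance, use it and close it;
    // close() returns the instance to the pool instead of destroying it
    final ODatabaseDocumentTx db = pool.acquire();
    try {
      // ... load and manipulate records here ...
    } finally {
      db.close();
    }

    pool.close();
  }
}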

P.S. In version 3.0 we are going to implement a WeakHashMap based on the Hopscotch hashing algorithm which will not suffer from the synchronization problem described above.

Sunday, May 1, 2016

OrientDB incremental backup API


Since version 2.2 we support an incremental backup feature. So what perks do you get if you use incremental backup instead of a classical backup?
  1. The database stores on disk only the data which were changed after the previous backup.
  2. The database is not switched to read-only mode, and users may continue to work with it.
Let's look at how it works inside OrientDB.

Every time we change data, together with the user data we store a kind of timestamp which shows us when this change happened; we call it the LSN. Unlike a normal timestamp, the LSN continuously grows with every update and can never be equal to a previous LSN. In each snippet of an incremental backup (the file which is created during a single incremental backup operation) we store the maximum LSN of the changes added to this snippet. So merging different snippets and finding the changes which happened after the previous backup is quite simple: we iterate over all data sequentially and compare the latest stored LSN with the LSN of the processed change; if the latter is bigger, we put the change into the new backup file.

To avoid losing changes which happen during this iteration over the database data, we log all operations since the start of the incremental backup process into the database journal, and at the end of the backup process we append the list of those operations to the backup file.

During the restore process we import all data added to the backup files and apply the operations from the database journal, so at the end of the process we get the database in the state it had at the end of the last incremental backup.

To perform an incremental backup from Java, call ODatabase#incrementalBackup(path-to-incremental-backup-directory); from the console the same operation looks like backup database path-to-incremental-backup-directory -incremental.

To create a database from an incremental backup, call ODatabase#create(path-to-incremental-backup-directory) from Java, or from the console: create database root root plocal graph -restore=path-to-incremental-backup-directory. A sketch of both Java calls is shown below.
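A minimal sketch of the two Java calls mentioned above (all paths and credentials are hypothetical placeholders):

import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;

public class IncrementalBackupExample {
  public static void main(String[] args) {
    // take an incremental backup of an existing database
    final ODatabaseDocumentTx db = new ODatabaseDocumentTx("plocal:/tmp/mydb").open("admin", "admin");
    try {
      db.incrementalBackup("/tmp/mydb-backup");
    } finally {
      db.close();
    }

    // restore a new database from the incremental backup directory
    final ODatabaseDocumentTx restored = new ODatabaseDocumentTx("plocal:/tmp/mydb-restored");
    restored.create("/tmp/mydb-backup");
    restored.close();
  }
}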

Please note that the incremental backup feature is available only in the Enterprise Edition of OrientDB.