by lars hofhansl
modern cpu cores can execute hundreds of instructions in the time it takes to reload the l1 cache. "ram is the new disk" as a coworker at salesforce likes to say. the l1-cache is the new ram i might add.
as we add more and more cpu cores, we can easily become memory io bound unless we are careful.
many common problems i have seen over the years were related to two things: excessive synchronization (and the memory barriers it entails) and unnecessary memory copying.
aside from safety and liveness considerations, a typical problem is too much synchronization, which limits potential parallel execution.
memory barriers are required in java by the following language constructs:
- synchronized - sets read and write barriers as needed (details depend on jvm, version, and settings)
- volatile - sets a read barrier before a read of a volatile, and a write barrier after a write
- final - sets a write barrier after the assignment
- atomicinteger, atomiclong, etc - use volatiles and hardware cas instructions
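the cost of an atomic in a hot path can be sketched like this - a minimal example (the names are mine, not hbase's) contrasting an incrementAndGet() per iteration, which implies a cas with full-barrier semantics every time, with a plain local accumulator that is published once at the end:

```java
import java.util.concurrent.atomic.AtomicLong;

public class MetricCost {
    static final AtomicLong sharedMetric = new AtomicLong();

    // one cas (and its memory barrier) per iteration - the pattern
    // that made a metric expensive in a hot scan loop
    static long countWithAtomic(int columns) {
        for (int i = 0; i < columns; i++) {
            sharedMetric.incrementAndGet();
        }
        return sharedMetric.get();
    }

    // accumulate in a plain local, publish once:
    // one barrier per batch instead of one per column
    static long countBatched(int columns) {
        long local = 0;
        for (int i = 0; i < columns; i++) {
            local++;  // plain increment, no fence
        }
        return sharedMetric.addAndGet(local);
    }
}
```

both variants end up with the same count; only the number of barriers differs.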
memory copying is often seen in java, for example because of the lack of in-array pointers, or simply because of general unawareness and the expectation that the "garbage collector will clean up the mess." well, it does, but not without a price.
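the in-array-pointer idiom looks like this - a sketch of my own (not hbase code) comparing a method that materializes a copy per call with one that works directly against (array, offset, length) and allocates nothing:

```java
import java.util.Arrays;

public class ZeroCopy {
    // allocates and copies: one piece of garbage per call
    static byte[] valueCopy(byte[] buf, int off, int len) {
        return Arrays.copyOfRange(buf, off, off + len);
    }

    // compares a region of the buffer in place: no allocation, no copy
    static boolean regionEquals(byte[] buf, int off, int len, byte[] key) {
        if (len != key.length) return false;
        for (int i = 0; i < len; i++) {
            if (buf[off + i] != key[i]) return false;
        }
        return true;
    }
}
```

in a tight loop over millions of cells the second form produces no garbage at all, which is exactly the kind of saving the scan changes below aim for.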
like any software project of reasonable size, hbase has problems of all the above categories.
profiling in java has become extremely convenient. just start jvisualvm, which ships with the sun/oracle jdk, pick the process to profile (in my case a local hbase regionserver), and start profiling.
over the past few weeks i did some on and off profiling in hbase, which led to the following issues:
hbase-6603 - regionmetricsstorage.incrnumericmetric is called too often
ironically it was the collection of a performance metric that caused a measurable slowdown of up to 15%(!) for very wide rows (> 10k columns).
the metric was maintained as an atomiclong, which introduced a memory barrier in one of the hottest code paths in hbase.
the good folks at facebook have found the same issue at roughly the same time. (it turns out that they were also... uhm... the folks who introduced the problem.)
hbase-6621 - reduce calls to bytes.toint
a keyvalue (the data structure that represents "columns" in hbase) is currently backed by a single byte array. the sizes of its various parts are encoded in this array and have to be read and decoded, each time costing an extra memory access. in many cases that can be avoided, leading to a slight performance improvement.
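the idea can be sketched as follows - a toy example with names of my own choosing (toInt stands in for a Bytes.toInt-style decoder) that decodes a length from the backing array once and caches it in a field, instead of re-reading four bytes on every access:

```java
public class LengthCache {
    // decode a big-endian int from a byte array (~ what Bytes.toInt does)
    static int toInt(byte[] b, int off) {
        return ((b[off] & 0xff) << 24) | ((b[off + 1] & 0xff) << 16)
             | ((b[off + 2] & 0xff) << 8) | (b[off + 3] & 0xff);
    }

    final byte[] buf;
    final int offset;
    private int keyLen = -1;  // lazily decoded, then cached

    LengthCache(byte[] buf, int offset) {
        this.buf = buf;
        this.offset = offset;
    }

    int getKeyLength() {
        if (keyLen < 0) keyLen = toInt(buf, offset);  // decode once, read many times
        return keyLen;
    }
}
```

the trade-off is the extra field per object, which is why such caching has to be weighed against per-object memory cost (see hbase-7279 below, where a cached row key was removed for exactly that reason).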
hbase-6711 - avoid local results copy in storescanner
all keyvalues pertaining to a single row (i.e. keyvalues with the same row key) were copied at the storescanner layer. removing this copy led to another slight performance increase with wide rows.
hbase-7180 - regionscannerimpl.next() is inefficient
this introduces a mechanism for coprocessors to access regionscanners at a lower level, allowing them to skip a lot of unnecessary setup for each next() call. in tight loops a coprocessor can use this new api to save another 10-15%.
hbase-7279 - avoid copying the rowkey in regionscanner, storescanner, and scanquerymatcher
the row key of a keyvalue was copied in the various scan-related classes. to reduce that effect the row key was previously cached in the keyvalue class - at the cost of extra memory for each keyvalue.
this change avoids all copying and hence also obviates the need for caching the row key.
a keyvalue is now hardly more than an array pointer (a byte array, an offset, and a length), and no data is copied any longer all the way from the block loaded from disk or cache to the rpc layer (unless the keyvalues are optionally encoded on disk, in which case they still need to be decoded in memory - we're working on improving that too).
previously the size of a keyvalue on the scan path was at least 116 bytes + the length of the rowkey (which can be arbitrarily long). now it is ~60 bytes, flat and including its own reference.
(remember that during the course of a large scan we might be creating millions or even billions of keyvalue objects.)
this is a nice improvement both in terms of scan performance (15-20% for small row keys of a few bytes, much more for large ones) and in terms of produced garbage.
since all copying is avoided, scanning now scales almost linearly with the number of cores.
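the "keyvalue as an array pointer" shape can be sketched like this - a simplified model of my own, with an assumed toy layout (a 2-byte row length followed by the row key; the real keyvalue format has more fields), where the row key is read and compared in place and no byte[] is ever materialized:

```java
public class KVPointer {
    final byte[] buf;   // the block bytes, shared by many keyvalues
    final int offset;   // where this cell starts in the block
    final int length;   // how many bytes it spans

    KVPointer(byte[] buf, int offset, int length) {
        this.buf = buf;
        this.offset = offset;
        this.length = length;
    }

    // assumed toy layout: 2-byte big-endian row length, then the row key
    int getRowLength() {
        return ((buf[offset] & 0xff) << 8) | (buf[offset + 1] & 0xff);
    }

    int getRowOffset() {
        return offset + 2;
    }

    // compare the row key in place - no copy, no garbage
    boolean rowEquals(byte[] row) {
        int len = getRowLength();
        if (len != row.length) return false;
        for (int i = 0; i < len; i++) {
            if (buf[getRowOffset() + i] != row[i]) return false;
        }
        return true;
    }
}
```

many such pointers can share one block's byte array, which is why the per-keyvalue footprint stays flat regardless of row key size.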
hbase-6852 - schemametrics.updateoncachehit costs too much while full scanning a table with all of its fields
other folks have been busy too. here cheng hao found another problem with a scan-related metric that caused a noticeable slowdown (even though i did not believe him at first).
this removed another set of unnecessary memory barriers.
hbase-7336 - hfileblock.readatoffset does not work well with multiple threads
this is a slightly different issue, caused by bad synchronization of the fsreader associated with a storefile. there is only a single reader per storefile. so if the file's blocks are not cached - possibly because the scan indicated that it wants no caching, because it expects to touch too many blocks - the scanner threads end up competing for read access to the store file. that leads to outright terrible performance, such as scanners timing out even with just two scanners accessing the same file in a tight loop.
this patch is a stopgap measure: attempt to acquire the lock on the reader; if that fails, switch to hdfs positional reads, which can read at an offset without affecting the state of the stream and hence require no locking.
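the try-lock-then-fall-back pattern can be sketched as follows - a simplified model of my own (a byte array stands in for the store file; the real code deals with hdfs streams), where the uncontended path uses the shared seeking reader and the contended path does a positional read that touches no shared state:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FallbackReader {
    private final ReentrantLock streamLock = new ReentrantLock();
    private final byte[] file;   // stand-in for the store file
    private int streamPos = 0;   // stream state, guarded by streamLock

    FallbackReader(byte[] file) {
        this.file = file;
    }

    byte[] readBlock(int offset, int len) {
        byte[] out = new byte[len];
        if (streamLock.tryLock()) {
            // uncontended: seek + read on the shared stream
            try {
                streamPos = offset;
                System.arraycopy(file, streamPos, out, 0, len);
                streamPos += len;
            } finally {
                streamLock.unlock();
            }
        } else {
            // contended: positional read carries its own offset,
            // mutates no shared state, needs no lock
            System.arraycopy(file, offset, out, 0, len);
        }
        return out;
    }
}
```

the point of tryLock (rather than lock) is that a scanner never blocks behind another scanner; it simply pays the positional-read path instead.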
summary
together these various changes can lead to a ~40-50% scan performance improvement when using a single core, and even more when using multiple cores on the same machine (as is the case with hbase).
an entirely unscientific benchmark
20m rows, with two column families of just a few dozen bytes each.
i performed two tests:
1. a scan that returns rows to the client
2. a scan that touches all rows via a filter but does not return anything to the client (this is useful to gauge the actual server-side performance).
further, i tested with (1) no caching, all reads from disk, (2) all data in the os cache, and (3) all data in hbase's block cache.
i compared 0.94.0 against the current 0.94 branch (what i will soon release as 0.94.4).
- scanning with scanner caching set to 10000:
0.94.0:
no data in cache: 54s
data in os cache: 51s
data in block cache: 35s
0.94 branch:
no data in cache: 50s (io bound between disk and network)
data in os cache: 43s
data in block cache: 32s
(limiting factor was shipping the results to the client)
- all data filtered at the server (with a singlecolumnvaluefilter that does not match anything, so each row is still scanned):
0.94.0:
no data in cache: 31s
data in os cache: 25s
data in block cache: 11s
0.94 branch:
no data in cache: 22s
data in os cache: 17s
data in block cache: 6.3s
so, as you can see, scan performance has improved significantly since 0.94.0.
salesforce just hired some performance engineers from a well known chip manufacturer, and i plan to get some of their time to analyze hbase in even more detail, to track down memory stalls, etc.