direct ByteBuffer performance note
Via https://gist.github.com/colinrgodsey/9bc606d09d035ba2334c we discovered it was always faster to copy to a direct buffer when dealing with NIO. No concessions need to be made for wrapping in-heap data.
parent 8aa610a2aa
commit 9bfb0fed27
1 changed file with 6 additions and 2 deletions
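The claim in the commit message can be illustrated with a small sketch. This is not the benchmark code from the gist; the object name, channel, and buffer handling below are assumptions made for illustration. It contrasts writing an on-heap Array[Byte] by wrapping it against copying it through a reusable direct buffer (such as one taken from a pool) before the write.

```scala
import java.nio.ByteBuffer
import java.nio.channels.WritableByteChannel

object DirectVsWrappedWrite {
  // Path 1: wrap the on-heap array. The JDK still copies the bytes into a
  // temporary native buffer internally before handing them to the OS.
  def writeWrapped(channel: WritableByteChannel, data: Array[Byte]): Unit = {
    val buf = ByteBuffer.wrap(data)
    while (buf.hasRemaining) channel.write(buf)
  }

  // Path 2: copy through a reusable direct buffer and write that. Per the
  // measurements referenced in the commit message, this was always at least
  // as fast as wrapping.
  def writeViaDirect(channel: WritableByteChannel, data: Array[Byte], direct: ByteBuffer): Unit = {
    var offset = 0
    while (offset < data.length) {
      direct.clear()
      val chunk = math.min(direct.remaining(), data.length - offset)
      direct.put(data, offset, chunk)
      direct.flip()
      while (direct.hasRemaining) channel.write(direct)
      offset += chunk
    }
  }
}
```

The intuition behind the result: a wrapped heap buffer is copied into a temporary native buffer inside the JDK on every NIO write anyway, so doing that copy yourself into a pooled direct buffer adds no extra cost and avoids the hidden per-write allocation.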
@@ -19,8 +19,12 @@ trait BufferPool {
  * A buffer pool which keeps a free list of direct buffers of a specified default
  * size in a simple fixed size stack.
  *
- * If the stack is full a buffer offered back is not kept but will be let for
- * being freed by normal garbage collection.
+ * If the stack is full the buffer is de-referenced and available to be
+ * freed by normal garbage collection.
+ *
+ * Using a direct ByteBuffer when dealing with NIO operations has been proven
+ * to be faster than wrapping on-heap Arrays. There is ultimately no performance
+ * benefit to wrapping in-heap JVM data when writing with NIO.
  */
 private[akka] class DirectByteBufferPool(defaultBufferSize: Int, maxPoolEntries: Int) extends BufferPool {
   private[this] val locked = new AtomicBoolean(false)
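For context, the contract the updated scaladoc describes (a fixed-size stack of pooled direct buffers, where a buffer released into a full pool is simply dropped for the garbage collector) might look roughly like the sketch below. This is deliberately simplified and is not Akka's actual DirectByteBufferPool, which guards its stack with the AtomicBoolean shown in the diff rather than `synchronized`; the class and method names here are illustrative.

```scala
import java.nio.ByteBuffer

final class SimpleDirectBufferPool(defaultBufferSize: Int, maxPoolEntries: Int) {
  private[this] val pool = new Array[ByteBuffer](maxPoolEntries)
  private[this] var size = 0

  /** Take a pooled buffer if one is available, otherwise allocate a fresh one. */
  def acquire(): ByteBuffer = synchronized {
    if (size > 0) {
      size -= 1
      val buf = pool(size)
      pool(size) = null
      buf
    } else ByteBuffer.allocateDirect(defaultBufferSize)
  }

  /** Offer a buffer back; if the stack is full it is simply dropped. */
  def release(buf: ByteBuffer): Unit = synchronized {
    if (size < maxPoolEntries) {
      buf.clear()
      pool(size) = buf
      size += 1
    }
    // else: the buffer is de-referenced here and reclaimed by normal
    // garbage collection, the behaviour the updated comment documents.
  }
}
```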