as_policy_batch Struct Reference

Detailed Description

Batch Policy

Definition at line 593 of file as_policy.h.

#include "as_policy.h"


Data Fields

bool allow_inline
 
bool concurrent
 
uint32_t timeout
 
bool use_batch_direct
 

Related Functions

(Note that these are not member functions.)

static void as_policy_batch_copy (as_policy_batch *src, as_policy_batch *trg)
 
static as_policy_batch * as_policy_batch_init (as_policy_batch *p)
 

Friends And Related Function Documentation

static void as_policy_batch_copy (as_policy_batch *src, as_policy_batch *trg)
related

Copy as_policy_batch values.

Parameters
src: The source policy.
trg: The target policy.

Definition at line 1051 of file as_policy.h.

References allow_inline, concurrent, timeout, and use_batch_direct.

static as_policy_batch * as_policy_batch_init (as_policy_batch *p)
related

Initialize as_policy_batch to default values.

Parameters
p: The policy to initialize.
Returns
The initialized policy.

Definition at line 1033 of file as_policy.h.

References allow_inline, AS_POLICY_TIMEOUT_DEFAULT, concurrent, timeout, and use_batch_direct.
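The init-then-copy pattern above can be sketched as follows. This is a minimal, self-contained illustration, not the library's actual implementation: the local `batch_policy` struct mirrors the documented fields, and `POLICY_TIMEOUT_DEFAULT` is an assumed stand-in for `AS_POLICY_TIMEOUT_DEFAULT`, whose real value is defined in as_policy.h.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Assumed stand-in for AS_POLICY_TIMEOUT_DEFAULT; the real value
 * is defined in as_policy.h and may differ. */
#define POLICY_TIMEOUT_DEFAULT 1000

/* Local mirror of the documented as_policy_batch fields. */
typedef struct {
    uint32_t timeout;
    bool concurrent;
    bool use_batch_direct;
    bool allow_inline;
} batch_policy;

/* Mirrors as_policy_batch_init: set every field to its documented
 * default and return the policy for chaining. */
static batch_policy* batch_policy_init(batch_policy* p)
{
    p->timeout = POLICY_TIMEOUT_DEFAULT;
    p->concurrent = false;       /* sequential per-node commands */
    p->use_batch_direct = false; /* prefer the new batch index protocol */
    p->allow_inline = true;      /* server may process inline */
    return p;
}

/* Mirrors as_policy_batch_copy: field-by-field copy from src to trg. */
static void batch_policy_copy(const batch_policy* src, batch_policy* trg)
{
    trg->timeout = src->timeout;
    trg->concurrent = src->concurrent;
    trg->use_batch_direct = src->use_batch_direct;
    trg->allow_inline = src->allow_inline;
}
```

A typical use is to initialize one policy with defaults, copy it, and then override individual fields on the copy, leaving the base policy untouched.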

Field Documentation

bool as_policy_batch::allow_inline

Allow batch to be processed immediately in the server's receiving thread when the server deems it to be appropriate. If false, the batch will always be processed in separate transaction threads. This field is only relevant for the new batch index protocol.

For batch exists or batch reads of smaller sized records (<= 1K per record), inline processing will be significantly faster on "in memory" namespaces. The server disables inline processing on disk based namespaces regardless of this policy field.

Inline processing can introduce the possibility of unfairness, because the server can process the entire batch before moving on to the next command. Default: true.

Definition at line 649 of file as_policy.h.
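The size guidance above can be sketched as a simple predicate. The helper name `inline_likely_faster`, the 1 KiB threshold, and the namespace flag are illustrative assumptions drawn from the description, not part of the API:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical helper mirroring the documented guidance: inline
 * processing pays off for batch exists / batch reads of small records
 * (<= 1 KiB each) on in-memory namespaces. The server ignores
 * allow_inline for disk-based namespaces regardless. */
static bool inline_likely_faster(size_t avg_record_bytes, bool in_memory_namespace)
{
    return in_memory_namespace && avg_record_bytes <= 1024;
}
```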

bool as_policy_batch::concurrent

Determine if batch commands to each server are run in parallel threads.

Values:

  • false: Issue batch commands sequentially. This mode has a performance advantage for small-to-medium batches because commands can be issued from the main transaction thread. This is the default.
  • true: Issue batch commands in parallel threads. This mode has a performance advantage for large batches because each node can process its command immediately. The downside is that extra threads must be created (or taken from a thread pool).

Definition at line 619 of file as_policy.h.
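The tradeoff above can be expressed as a small heuristic. Both the helper name `choose_concurrent` and the batch-size threshold are assumptions for illustration; the documentation does not define a cutoff:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical heuristic: run per-node batch commands in parallel
 * threads only when the batch is large enough to amortize the cost
 * of creating (or borrowing) the extra threads. The threshold value
 * is an illustrative assumption, not a documented constant. */
#define CONCURRENT_BATCH_THRESHOLD 100

static bool choose_concurrent(size_t batch_size)
{
    return batch_size > CONCURRENT_BATCH_THRESHOLD;
}
```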

uint32_t as_policy_batch::timeout

Maximum time in milliseconds to wait for the operation to complete.

Definition at line 599 of file as_policy.h.

bool as_policy_batch::use_batch_direct

Use the old batch direct protocol, where batch reads are handled by direct low-level batch server database routines. The batch direct protocol can be faster when there is a single namespace, but it has one important drawback: it will not proxy to a different server node when the mapped node has migrated a record to another node (resulting in a record-not-found error).

This can happen after a node has been added to or removed from the cluster, while there is a lag between records being migrated and the client's partition map update (refreshed once per second).

The new batch index protocol performs this record proxy when necessary. Default: false (use the new batch index protocol if the server supports it).

Definition at line 634 of file as_policy.h.


The documentation for this struct was generated from the following file:
as_policy.h