Package | Description |
---|---|
org.apache.hadoop.hbase | |
org.apache.hadoop.hbase.codec.prefixtree | |
org.apache.hadoop.hbase.io.compress | |
org.apache.hadoop.hbase.io.encoding | |
org.apache.hadoop.hbase.io.hfile | Provides the hbase data+index+metadata file. |
org.apache.hadoop.hbase.regionserver | |
org.apache.hadoop.hbase.regionserver.compactions | |
org.apache.hadoop.hbase.util | |
Modifier and Type | Method and Description |
---|---|
Compression.Algorithm | HColumnDescriptor.getCompactionCompression() |
Compression.Algorithm | HColumnDescriptor.getCompactionCompressionType() |
Compression.Algorithm | HColumnDescriptor.getCompression() |
Compression.Algorithm | HColumnDescriptor.getCompressionType() |
Modifier and Type | Method and Description |
---|---|
HColumnDescriptor | HColumnDescriptor.setCompactionCompressionType(Compression.Algorithm type) - Compression types supported in hbase. |
HColumnDescriptor | HColumnDescriptor.setCompressionType(Compression.Algorithm type) - Compression types supported in hbase. |
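
A minimal sketch of how these accessors pair up, assuming a SNAPPY codec is installed on the cluster: the flush compression and the compaction compression are set independently on the column family descriptor.

```java
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.io.compress.Compression;

public class FamilyCompressionSketch {
  public static void main(String[] args) {
    HColumnDescriptor family = new HColumnDescriptor("cf");
    // Codec used when memstore flushes write new store files.
    family.setCompressionType(Compression.Algorithm.SNAPPY);
    // Codec used when those files are rewritten by compactions.
    family.setCompactionCompressionType(Compression.Algorithm.GZ);

    System.out.println("flush codec:      " + family.getCompressionType());
    System.out.println("compaction codec: " + family.getCompactionCompressionType());
  }
}
```
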
Modifier and Type | Method and Description |
---|---|
HFileBlockDecodingContext | PrefixTreeCodec.newDataBlockDecodingContext(Compression.Algorithm compressionAlgorithm) |
HFileBlockEncodingContext | PrefixTreeCodec.newDataBlockEncodingContext(Compression.Algorithm compressionAlgorithm, DataBlockEncoding encoding, byte[] header) |
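
A hedged sketch of obtaining a prefix-tree decoding context for uncompressed blocks; the public no-arg PrefixTreeCodec constructor is an assumption here (the codec is normally instantiated for you via DataBlockEncoding.PREFIX_TREE).

```java
import org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeCodec;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext;

public class PrefixTreeContextSketch {
  public static void main(String[] args) {
    // Assumption: PrefixTreeCodec can be constructed directly for experimentation.
    PrefixTreeCodec codec = new PrefixTreeCodec();
    // Decoding context tied to the block compression in use (NONE here).
    HFileBlockDecodingContext ctx =
        codec.newDataBlockDecodingContext(Compression.Algorithm.NONE);
    System.out.println("decoding context: " + ctx.getClass().getSimpleName());
  }
}
```
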
Modifier and Type | Method and Description |
---|---|
static Compression.Algorithm | Compression.getCompressionAlgorithmByName(String compressName) |
static Compression.Algorithm | Compression.Algorithm.valueOf(String name) - Returns the enum constant of this type with the specified name. |
static Compression.Algorithm[] | Compression.Algorithm.values() - Returns an array containing the constants of this enum type, in the order they are declared. |
Modifier and Type | Method and Description |
---|---|
static void | Compression.decompress(byte[] dest, int destOffset, InputStream bufferedBoundedStream, int compressedSize, int uncompressedSize, Compression.Algorithm compressAlgo) - Decompresses data from the given stream using the configured compression algorithm. |
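
The two lookup paths differ in the name they expect: valueOf takes the enum constant name, while getCompressionAlgorithmByName takes the lower-case codec name used in table schema settings. A small sketch:

```java
import org.apache.hadoop.hbase.io.compress.Compression;

public class AlgorithmLookupSketch {
  public static void main(String[] args) {
    // Enum-style lookup by constant name ("GZ", "SNAPPY", "NONE", ...).
    Compression.Algorithm byConstant = Compression.Algorithm.valueOf("GZ");
    // Lookup by the lower-case codec name used in schema settings ("gz", "snappy", "none", ...).
    Compression.Algorithm byCodecName = Compression.getCompressionAlgorithmByName("gz");
    System.out.println(byConstant == byCodecName);  // true: both resolve to GZ

    // values() enumerates every algorithm this build declares, in declaration order.
    for (Compression.Algorithm algo : Compression.Algorithm.values()) {
      System.out.println(algo);
    }
  }
}
```
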
Modifier and Type | Method and Description |
---|---|
Compression.Algorithm | HFileBlockEncodingContext.getCompression() |
Compression.Algorithm | HFileBlockDefaultDecodingContext.getCompression() |
Compression.Algorithm | HFileBlockDefaultEncodingContext.getCompression() |
Compression.Algorithm | HFileBlockDecodingContext.getCompression() |
Modifier and Type | Method and Description |
---|---|
static int | EncodedDataBlock.getCompressedSize(Compression.Algorithm algo, org.apache.hadoop.io.compress.Compressor compressor, byte[] inputBuffer, int offset, int length) - Find the size of compressed data assuming that buffer will be compressed using given algorithm. |
int | EncodedDataBlock.getEncodedCompressedSize(Compression.Algorithm comprAlgo, org.apache.hadoop.io.compress.Compressor compressor) - Estimate size after second stage of compression. |
HFileBlockDecodingContext | DataBlockEncoder.newDataBlockDecodingContext(Compression.Algorithm compressionAlgorithm) - Creates an encoder-specific decoding context, which will prepare the data before actual decoding. |
HFileBlockEncodingContext | DataBlockEncoder.newDataBlockEncodingContext(Compression.Algorithm compressionAlgorithm, DataBlockEncoding encoding, byte[] headerBytes) - Creates an encoder-specific encoding context. |
Constructor and Description |
---|
HFileBlockDefaultDecodingContext(Compression.Algorithm compressAlgo) |
HFileBlockDefaultEncodingContext(Compression.Algorithm compressionAlgorithm, DataBlockEncoding encoding, byte[] headerBytes) |
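
A sketch of sizing a buffer under a given codec with EncodedDataBlock.getCompressedSize; the getCompressor/returnCompressor pooling calls on Compression.Algorithm are assumptions in this snippet.

```java
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.io.encoding.EncodedDataBlock;
import org.apache.hadoop.io.compress.Compressor;

public class CompressedSizeSketch {
  public static void main(String[] args) throws Exception {
    byte[] data = new byte[64 * 1024];              // sample buffer; all zeros compresses well
    Compression.Algorithm algo = Compression.Algorithm.GZ;

    Compressor compressor = algo.getCompressor();   // assumption: pooled compressor accessor
    try {
      int compressed = EncodedDataBlock.getCompressedSize(algo, compressor, data, 0, data.length);
      System.out.println(data.length + " bytes -> " + compressed + " bytes under GZ");
    } finally {
      algo.returnCompressor(compressor);            // assumption: return-to-pool call
    }
  }
}
```
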
Modifier and Type | Field and Description |
---|---|
protected Compression.Algorithm | AbstractHFileWriter.compressAlgo - The compression algorithm used. |
protected Compression.Algorithm | AbstractHFileReader.compressAlgo - Filled when we read in the trailer. |
protected Compression.Algorithm | HFile.WriterFactory.compression |
static Compression.Algorithm | HFile.DEFAULT_COMPRESSION_ALGORITHM - Default compression: none. |
Modifier and Type | Method and Description |
---|---|
static Compression.Algorithm | AbstractHFileWriter.compressionByName(String algoName) |
Compression.Algorithm | HFile.Reader.getCompressionAlgorithm() |
Compression.Algorithm | AbstractHFileReader.getCompressionAlgorithm() |
Compression.Algorithm | FixedFileTrailer.getCompressionCodec() |
Modifier and Type | Method and Description |
---|---|
protected abstract HFile.Writer | HFile.WriterFactory.createWriter(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.FSDataOutputStream ostream, int blockSize, Compression.Algorithm compress, HFileDataBlockEncoder dataBlockEncoder, KeyValue.KVComparator comparator, ChecksumType checksumType, int bytesPerChecksum, boolean includeMVCCReadpoint) |
HFileBlockDecodingContext | NoOpDataBlockEncoder.newOnDiskDataBlockDecodingContext(Compression.Algorithm compressionAlgorithm) |
HFileBlockDecodingContext | HFileDataBlockEncoderImpl.newOnDiskDataBlockDecodingContext(Compression.Algorithm compressionAlgorithm) |
HFileBlockDecodingContext | HFileDataBlockEncoder.newOnDiskDataBlockDecodingContext(Compression.Algorithm compressionAlgorithm) - Create an encoder-specific decoding context for reading. |
HFileBlockEncodingContext | NoOpDataBlockEncoder.newOnDiskDataBlockEncodingContext(Compression.Algorithm compressionAlgorithm, byte[] dummyHeader) |
HFileBlockEncodingContext | HFileDataBlockEncoderImpl.newOnDiskDataBlockEncodingContext(Compression.Algorithm compressionAlgorithm, byte[] dummyHeader) |
HFileBlockEncodingContext | HFileDataBlockEncoder.newOnDiskDataBlockEncodingContext(Compression.Algorithm compressionAlgorithm, byte[] headerBytes) - Create an encoder-specific encoding context object for writing. |
void | FixedFileTrailer.setCompressionCodec(Compression.Algorithm compressionCodec) |
HFile.WriterFactory | HFile.WriterFactory.withCompression(Compression.Algorithm compression) |
Constructor and Description |
---|
AbstractHFileWriter(CacheConfig cacheConf, org.apache.hadoop.fs.FSDataOutputStream outputStream, org.apache.hadoop.fs.Path path, int blockSize, Compression.Algorithm compressAlgo, HFileDataBlockEncoder dataBlockEncoder, KeyValue.KVComparator comparator) |
HFileBlock.Writer(Compression.Algorithm compressionAlgorithm, HFileDataBlockEncoder dataBlockEncoder, boolean includesMemstoreTS, ChecksumType checksumType, int bytesPerChecksum) |
HFileWriterV2(org.apache.hadoop.conf.Configuration conf, CacheConfig cacheConf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.FSDataOutputStream ostream, int blockSize, Compression.Algorithm compressAlgo, HFileDataBlockEncoder blockEncoder, KeyValue.KVComparator comparator, ChecksumType checksumType, int bytesPerChecksum, boolean includeMVCCReadpoint) - Constructor that takes a path, creates and closes the output stream. |
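
A hedged sketch of the fluent writer path that ends in withCompression; the getWriterFactory, withPath, withBlockSize, create, and raw byte[] append calls are assumptions about this release's factory API.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;
import org.apache.hadoop.hbase.io.hfile.HFile;
import org.apache.hadoop.hbase.util.Bytes;

public class CompressedHFileSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    FileSystem fs = FileSystem.get(conf);

    HFile.Writer writer = HFile.getWriterFactory(conf, new CacheConfig(conf))
        .withPath(fs, new Path("/tmp/compression-sketch.hfile"))
        .withBlockSize(64 * 1024)                    // target block size before compression
        .withCompression(Compression.Algorithm.GZ)   // codec applied to each data block
        .create();
    try {
      writer.append(Bytes.toBytes("row1"), Bytes.toBytes("value1"));
    } finally {
      writer.close();
    }
  }
}
```
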
Modifier and Type | Method and Description |
---|---|
StoreFile.Writer | Store.createWriterInTmp(long maxKeyCount, Compression.Algorithm compression, boolean isCompaction, boolean includeMVCCReadpoint) |
StoreFile.Writer | HStore.createWriterInTmp(long maxKeyCount, Compression.Algorithm compression, boolean isCompaction, boolean includeMVCCReadpoint) |
StoreFile.WriterBuilder | StoreFile.WriterBuilder.withCompression(Compression.Algorithm compressAlgo) |
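
Both Store implementations expose the same createWriterInTmp signature; a sketch of a helper that opens a compaction writer with an explicit codec, where the Store handle is assumed to come from a coprocessor or test harness.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.regionserver.Store;
import org.apache.hadoop.hbase.regionserver.StoreFile;

public class CompactionWriterSketch {
  /** Opens a temporary store-file writer whose blocks are compressed with GZ. */
  static StoreFile.Writer openTmpWriter(Store store, long maxKeyCount) throws IOException {
    return store.createWriterInTmp(
        maxKeyCount,
        Compression.Algorithm.GZ,   // compaction-time codec, typically the family's compaction compression
        true,                       // isCompaction
        true);                      // includeMVCCReadpoint
  }
}
```
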
Modifier and Type | Field and Description |
---|---|
protected Compression.Algorithm | Compactor.compactionCompression |
Modifier and Type | Method and Description |
---|---|
static void | CompressionTest.testCompression(Compression.Algorithm algo) |
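
testCompression is handy as a startup or smoke check; a sketch that probes every algorithm this build declares and reports which codecs actually load.

```java
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.util.CompressionTest;

public class CodecAvailabilitySketch {
  public static void main(String[] args) {
    for (Compression.Algorithm algo : Compression.Algorithm.values()) {
      try {
        CompressionTest.testCompression(algo);   // round-trips a small payload through the codec
        System.out.println(algo + ": OK");
      } catch (Throwable t) {                    // missing native libraries typically surface here
        System.out.println(algo + ": unavailable (" + t.getMessage() + ")");
      }
    }
  }
}
```
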
Copyright © 2013 The Apache Software Foundation. All rights reserved.