
FalconLite2 Model

FalconLite2 is a fine-tuned and quantized Falcon 40B language model capable of processing long (up to 24K tokens) input sequences. By utilizing 4-bit GPTQ quantization and an adapted RotaryEmbedding, FalconLite2 can process 10x longer contexts while consuming 4x less GPU memory than the original model, making it useful for applications such as topic retrieval, summarization, and question answering. FalconLite2 can be deployed on a single AWS g5.12xlarge instance with TGI 1.0.3 or TGI 1.1.0, making it suitable for applications that require high performance in resource-constrained environments. You can also deploy FalconLite2 directly on SageMaker endpoints.

FalconLite2 evolves from FalconLite, and their similarities and differences are summarized below:

| Model | Fine-tuned on long contexts | Quantization | Max context length | RotaryEmbedding adaptation | Inference framework |
|---|---|---|---|---|---|
| FalconLite | No | 4-bit GPTQ | 12K | dNTK | TGI 0.9.2 |
| FalconLite2 | Yes | 4-bit GPTQ | 24K | rope_theta = 1000000 | TGI 1.0.3 & 1.1.0 |
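If you want to confirm these long-context settings programmatically, here is a minimal sketch using transformers. The model id `amazon/FalconLite2` is an assumption; substitute the repository whose weights you actually deploy.

```python
from transformers import AutoConfig

# Assumption: replace with the actual repository id of the served weights.
config = AutoConfig.from_pretrained("amazon/FalconLite2", trust_remote_code=True)

# FalconLite2's extended context window comes from rope_theta = 1000000.
print(getattr(config, "rope_theta", None))
print(getattr(config, "max_position_embeddings", None))
```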

Model Details

Deploy FalconLite2 on EC2

Log in via SSH to an AWS g5.12xlarge instance running the Deep Learning AMI.

Start the TGI server (TGI 1.0.3)

git clone https://github.com/awslabs/extending-the-context-length-of-open-source-llms.git falconlite-dev
cd falconlite-dev/falconlite2
# this may take a while to build updated vLLM CUDA kernels
./docker_build.sh
./start_falconlite.sh

Start the TGI server (TGI 1.1.0)

git clone https://github.com/awslabs/extending-the-context-length-of-open-source-llms.git falconlite-dev
cd falconlite-dev/falconlite2-tgi1.1.0
# this may take a while to build updated vLLM CUDA kernels
./docker_build_rebuild_vllm_rope-theta.sh
./start_falconlite.sh

Perform inference

# after FalconLite has been completely started
pip install -r ../script/requirements-client.txt

# test short context
python falconlite_client.py

# test long context of 13400 tokens, 
# which are copied from [Amazon Aurora FAQs](https://aws.amazon.com/rds/aurora/faqs/)
python falconlite_client.py -l

Important - Use the prompt template below for FalconLite2:

<|prompter|>What are the main challenges to support a long context for LLM?<|endoftext|><|assistant|>
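For programmatic access, a minimal client sketch that wraps a question in this template and calls TGI's `/generate` route. The host and port below are assumptions; use whatever your `start_falconlite.sh` configuration exposes.

```python
import requests

def build_prompt(question: str) -> str:
    # Wrap the user turn in the FalconLite2 template shown above.
    return f"<|prompter|>{question}<|endoftext|><|assistant|>"

# Assumption: adjust to the port your TGI container actually listens on.
ENDPOINT = "http://localhost:8080"

payload = {
    "inputs": build_prompt("What are the main challenges to support a long context for LLM?"),
    "parameters": {"max_new_tokens": 256},
}
resp = requests.post(f"{ENDPOINT}/generate", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["generated_text"])
```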

Important - When using FalconLite2 for inference for the first time, it may require a brief 'warm-up' period that can take tens of seconds. Subsequent inferences should be faster and return results in a more timely manner. This warm-up period is normal and does not affect the overall performance of the system once initialization is complete.
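One quick way to observe this is to time two consecutive requests, as in the hedged sketch below (again assuming the local endpoint from the previous example):

```python
import time
import requests

ENDPOINT = "http://localhost:8080"  # assumption: match your TGI port
payload = {
    "inputs": "<|prompter|>Say hello.<|endoftext|><|assistant|>",
    "parameters": {"max_new_tokens": 16},
}

for attempt in (1, 2):
    start = time.perf_counter()
    requests.post(f"{ENDPOINT}/generate", json=payload, timeout=300).raise_for_status()
    print(f"request {attempt}: {time.perf_counter() - start:.1f}s")
# The first request absorbs the warm-up; the second reflects steady-state latency.
```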

Deploy FalconLite2 on Amazon SageMaker

To deploy FalconLite2 on a SageMaker endpoint with TGI 1.0.3, please follow this notebook, running it on a SageMaker Notebook instance (e.g. g5.xlarge).

To deploy FalconLite2 on a SageMaker endpoint with TGI 1.1.0, please follow this notebook, running it on a SageMaker Notebook instance (e.g. g5.xlarge).
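As a rough outline of the deployment flow, here is a hedged sketch using the SageMaker Python SDK. The model id, environment values, and timeout are assumptions for illustration; the linked notebooks are the authoritative path.

```python
import sagemaker
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

role = sagemaker.get_execution_role()  # assumes you run inside SageMaker

# Pick the TGI container version you deploy against (1.0.3 or 1.1.0).
image_uri = get_huggingface_llm_image_uri("huggingface", version="1.0.3")

model = HuggingFaceModel(
    image_uri=image_uri,
    role=role,
    env={
        "HF_MODEL_ID": "amazon/FalconLite2",  # assumption: actual weight repo id
        "HF_MODEL_QUANTIZE": "gptq",          # serve the 4-bit GPTQ weights
        "SM_NUM_GPUS": "4",                   # ml.g5.12xlarge has 4 GPUs
        "MAX_INPUT_LENGTH": "24000",          # assumed long-context limits
        "MAX_TOTAL_TOKENS": "24576",
    },
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.12xlarge",
    container_startup_health_check_timeout=600,
)
print(predictor.endpoint_name)
```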

Evaluation Results

We evaluated FalconLite2 against benchmarks that are specifically designed to assess the capabilities of LLMs in handling longer contexts.

Accuracy

All input lengths are measured in tokens.

| Eval task | Input length 2851 | Input length 5568 | Input length 8313 | Input length 11044 | Input length 13780 |
|---|---|---|---|---|---|
| Topic Retrieval | 100% | 100% | 100% | 100% | 90% |

| Eval task | Input length 3818 | Input length 5661 | Input length 7505 | Input length 9354 | Input length 11188 | Input length 12657 |
|---|---|---|---|---|---|---|
| Line Retrieval | 84% | 82% | 66% | 56% | 62% | 34% |

| Eval task | Input length 3264 | Input length 5396 | Input length 8329 | Input length 10197 |
|---|---|---|---|---|
| Pass key Retrieval | 100% | 100% | 100% | 100% |

| Eval task | Test set Accuracy | Hard subset Accuracy |
|---|---|---|
| Question Answering with Long Input Texts | 53.4% | 45.4% |
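To illustrate what a pass-key retrieval case looks like, here is a simplified sketch of a prompt generator. It is illustrative only, not the exact benchmark harness used for the numbers above.

```python
import random

def make_passkey_prompt(filler_repeats: int) -> tuple[str, str]:
    """Hide a random pass key inside long filler text (illustrative only)."""
    passkey = str(random.randint(10000, 99999))
    filler = "The grass is green. The sky is blue. The sun is yellow. " * filler_repeats
    context = f"{filler}The pass key is {passkey}. Remember it. {filler}"
    prompt = f"<|prompter|>{context}\nWhat is the pass key?<|endoftext|><|assistant|>"
    return prompt, passkey

prompt, expected = make_passkey_prompt(filler_repeats=400)
# Send `prompt` to the endpoint and check the completion contains `expected`.
```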

Limitations

Before using the FalconLite2 model, it is important to perform your own independent assessment and to take measures to ensure that your use complies with your own specific quality control practices and standards, as well as the local rules, laws, regulations, licenses, and terms that apply to you and your content.
