vllm.attention.ops.vit_attn_wrappers ¶
This file contains ops for ViT attention that are compatible with torch.compile, since some operations used here (for instance, .item() in flash attention) are not supported by torch.compile.
Using these ops and wrapping vision blocks with torch.compile can improve throughput in vision models by ~5% (relative) on H100 and reduce token latencies by ~7% (see qwen2_5_vl for example usage).
To use these ops, you must have a recent version of PyTorch installed (>= 2.4.0).
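The general pattern these wrappers rely on is torch.library custom ops: the graph-breaking work, such as calling .item() on max_seqlen, is hidden inside a custom op that is paired with a shape-only "fake" implementation, so torch.compile can trace the surrounding vision block without graph breaks. The sketch below is a minimal, hypothetical illustration of that mechanism; the op name my_vit::toy_attn and the function bodies are assumptions, not vLLM's actual registration code.

```python
import torch
from torch.library import custom_op, register_fake


# Hypothetical op name; not the name vLLM registers.
@custom_op("my_vit::toy_attn", mutates_args=())
def toy_attn(
    q: torch.Tensor, k: torch.Tensor, v: torch.Tensor, max_seqlen: torch.Tensor
) -> torch.Tensor:
    # .item() is allowed here: the op body is opaque to torch.compile,
    # so it no longer forces a graph break in the surrounding vision block.
    _ = int(max_seqlen.item())
    # q/k/v assumed (batch, heads, seq, head_dim) for this toy example.
    return torch.nn.functional.scaled_dot_product_attention(q, k, v)


@register_fake("my_vit::toy_attn")
def _toy_attn_fake(q, k, v, max_seqlen):
    # Shape-only implementation used while torch.compile traces the graph.
    return torch.empty_like(q)
```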
flash_attn_maxseqlen_wrapper ¶
flash_attn_maxseqlen_wrapper(
q: Tensor,
k: Tensor,
v: Tensor,
cu_seqlens: Tensor,
max_seqlen: Tensor,
batch_size: int,
is_rocm_aiter: bool,
use_upstream_fa: bool,
) -> Tensor
Source code in vllm/attention/ops/vit_attn_wrappers.py
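A hedged call sketch for this wrapper; the tensor layouts and values below are assumptions for illustration rather than taken from the vLLM source. Note that max_seqlen is passed as a tensor, which keeps its .item() conversion inside the op.

```python
import torch

from vllm.attention.ops.vit_attn_wrappers import flash_attn_maxseqlen_wrapper

# Assumed layout: (batch, seq, num_heads, head_dim), packed varlen-style,
# with per-image boundaries recorded in cu_seqlens.
batch_size, seq_len, num_heads, head_dim = 1, 1024, 16, 80
q = torch.randn(batch_size, seq_len, num_heads, head_dim,
                device="cuda", dtype=torch.bfloat16)
k, v = torch.randn_like(q), torch.randn_like(q)

cu_seqlens = torch.tensor([0, 256, 1024], device="cuda", dtype=torch.int32)
max_seqlen = torch.tensor(768, device="cuda")  # longest segment, as a tensor

out = flash_attn_maxseqlen_wrapper(
    q, k, v, cu_seqlens, max_seqlen,
    batch_size,
    is_rocm_aiter=False,     # presumably selects the ROCm AITER kernel when True
    use_upstream_fa=False,   # presumably selects upstream flash-attn when True
)
```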
flash_attn_maxseqlen_wrapper_fake ¶
flash_attn_maxseqlen_wrapper_fake(
q: Tensor,
k: Tensor,
v: Tensor,
cu_seqlens: Tensor,
max_seqlen: Tensor,
batch_size: int,
is_rocm_aiter: bool,
use_upstream_fa: bool,
) -> Tensor
Source code in vllm/attention/ops/vit_attn_wrappers.py
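The _fake variant is the shape-only implementation registered for torch.compile tracing: it allocates an output with the right shape, dtype, and device without running the attention kernel. A rough sketch of this kind of function follows, with the input and output layouts treated as assumptions rather than the real op's contract.

```python
import torch


def maxseqlen_wrapper_fake_sketch(
    q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
    cu_seqlens: torch.Tensor, max_seqlen: torch.Tensor,
    batch_size: int, is_rocm_aiter: bool, use_upstream_fa: bool,
) -> torch.Tensor:
    # Assumes q is (batch, seq, heads, head_dim) and the real op returns a
    # (seq, batch, heads * head_dim) tensor; only metadata matters here.
    b, s, h, d = q.shape
    return torch.empty((s, b, h * d), dtype=q.dtype, device=q.device)
```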
torch_sdpa_wrapper ¶
Source code in vllm/attention/ops/vit_attn_wrappers.py
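A rough illustration of the SDPA-based fallback idea: run scaled_dot_product_attention independently on each cu_seqlens window and concatenate the results. This is a sketch under assumed tensor layouts, not the function's actual signature or body.

```python
import torch
import torch.nn.functional as F


def sdpa_over_cu_seqlens(q, k, v, cu_seqlens):
    # Assumed layout: (batch, seq, heads, head_dim). Attend within each
    # [cu_seqlens[i-1], cu_seqlens[i]) window independently, as in varlen
    # attention, then concatenate along the sequence dimension.
    outputs = []
    for i in range(1, len(cu_seqlens)):
        start, end = cu_seqlens[i - 1], cu_seqlens[i]
        q_i = q[:, start:end].transpose(1, 2)  # -> (batch, heads, win, head_dim)
        k_i = k[:, start:end].transpose(1, 2)
        v_i = v[:, start:end].transpose(1, 2)
        o_i = F.scaled_dot_product_attention(q_i, k_i, v_i)
        outputs.append(o_i.transpose(1, 2))    # back to (batch, win, heads, d)
    return torch.cat(outputs, dim=1)
```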
torch_sdpa_wrapper_fake ¶
vit_flash_attn_wrapper ¶
vit_flash_attn_wrapper(
q: Tensor,
k: Tensor,
v: Tensor,
cu_seqlens: Tensor,
max_seqlen: Tensor,
batch_size: int,
is_rocm_aiter: bool,
use_upstream_fa: bool,
) -> Tensor
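Tying this back to the module docstring: the intended pattern is to call the wrapper from a vision attention block and compile the block as a whole. The sketch below is hypothetical; the module structure and arguments are assumptions, and qwen2_5_vl in vLLM shows the real usage.

```python
import torch

from vllm.attention.ops.vit_attn_wrappers import vit_flash_attn_wrapper


class ToyVisionAttention(torch.nn.Module):
    def forward(self, q, k, v, cu_seqlens, max_seqlen, batch_size):
        # The wrapper hides .item() and backend selection from the graph,
        # so the whole block can be compiled without graph breaks.
        return vit_flash_attn_wrapper(
            q, k, v, cu_seqlens, max_seqlen, batch_size,
            is_rocm_aiter=False, use_upstream_fa=False,
        )


block = ToyVisionAttention()
compiled_block = torch.compile(block)  # compiling the block is what yields the speedup
```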