When "Best Effort" mode is enabled, the default latency threshold is 1000 microseconds and the default recovery percentage is 20%. The device enters "Best Effort" mode when latency reaches the threshold (1000 microseconds by default) and exits when latency drops to the recovery percentage of that threshold (200 microseconds, i.e. 20% of 1000). While latency is above the threshold, permitted traffic is shunted around the inspection engine until latency falls to the defined recovery point. As an example, at the default settings (1000 µs / 20%), a single TCP stream ramps from 318 Mbps to 552 Mbps by shunting 50% of packets.
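The enter/exit thresholds above form a simple hysteresis loop. A minimal Python sketch of that logic, assuming the default values; the function and parameter names here are illustrative, not part of the IPS:

```python
def best_effort_active(latency_us, currently_active,
                       queue_latency_us=1000, recover_percent=20):
    """Return whether "Best Effort" mode should be active.

    Enter when latency reaches the threshold; exit only once latency
    drops to recover_percent of the threshold (hysteresis).
    """
    exit_latency_us = queue_latency_us * recover_percent / 100  # 200 us at defaults
    if not currently_active:
        return latency_us >= queue_latency_us  # enter at the threshold
    return latency_us > exit_latency_us        # stay active until recovery point

# At the defaults (1000 us / 20%): enters at 1000 us, remains active
# at intermediate latencies such as 500 us, and exits at 200 us.
```

The gap between the two thresholds prevents the device from rapidly toggling in and out of "Best Effort" mode when latency hovers near the entry point.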
Example: If you are running a video application through the IPS device and the video is choppy, you can enable "Best Effort" mode to address the problem.
How to manage "Best Effort": To manage "Best Effort" mode, access the IPS via the CLI and execute the "debug np best-effort" command with the appropriate subcommand.
debug np best-effort parameters:

|Parameter|Description|Example|
|---|---|---|
|enable|Enables "Best Effort" mode.|debug np best-effort enable|
|disable|Disables "Best Effort" mode.|debug np best-effort disable|
|-queue-latency|Defines the latency threshold at which "Best Effort" mode is entered. The default is 1000 microseconds.|debug np best-effort enable -queue-latency &lt;microseconds&gt;|
|-recover-percent|Defines the recovery percentage at which "Best Effort" mode is exited. The default is 20%.|debug np best-effort enable -recover-percent &lt;percent&gt;|
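Putting the parameters together, enabling the mode with a tighter threshold and recovery point might look like the following. The values 500 and 10 are illustrative, and each flag is shown in a separate invocation matching the documented forms above:

```
debug np best-effort enable -queue-latency 500
debug np best-effort enable -recover-percent 10
```

At these settings the device would enter "Best Effort" mode at 500 microseconds of latency and exit at 50 microseconds (10% of 500).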
"Best Effort" and "show np tier-stats": When "Best Effort" mode is enabled on the IPS and you execute the "show np tier-stats" CLI command, a new parameter, "Ratio to best effort", is displayed.
What is "Ratio to Best Effort"? "Ratio to best effort" is the proportion of traffic being trusted at that tier. For example, if tier 3 receives 100 Mbps and the ratio to best effort is 10%, then 10 Mbps of traffic is trusted; the remaining 90% is either clean (does not hit a trigger) or passes to tier 4 for reassembly or threat verification.
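The arithmetic in that example can be sketched in a few lines of Python; the function name is illustrative and not part of any IPS tooling:

```python
def split_by_best_effort_ratio(tier_mbps, ratio_percent):
    """Split a tier's traffic into (trusted, remaining) Mbps.

    Trusted traffic is shunted around inspection; the remainder is
    either clean or passes to the next tier.
    """
    trusted = tier_mbps * ratio_percent / 100
    return trusted, tier_mbps - trusted

# Tier 3 at 100 Mbps with a 10% ratio to best effort:
trusted, remaining = split_by_best_effort_ratio(100, 10)
# trusted -> 10.0 Mbps, remaining -> 90.0 Mbps
```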