Documentation ¶
Index ¶
- Variables
- func NewLogger(logLevel zapcore.Level) (*zap.SugaredLogger, error)
- func RootContext(logger SugaredLogger) (context.Context, func())
- type ConfigUpdate
- type DoRetryChecker
- type DoRetryer
- type Doer
- type Loadtest
- type LoadtestOption
- func FlushRetriesOnShutdown(b bool) LoadtestOption
- func FlushRetriesTimeout(d time.Duration) LoadtestOption
- func Interval(d time.Duration) LoadtestOption
- func Logger(logger SugaredLogger) LoadtestOption
- func MaxIntervalTasks(n int) LoadtestOption
- func MaxTasks(max int) LoadtestOption
- func MaxWorkers(max int) LoadtestOption
- func MetricsCsvFilename(s string) LoadtestOption
- func MetricsCsvFlushInterval(d time.Duration) LoadtestOption
- func MetricsCsvWriterDisabled(b bool) LoadtestOption
- func NumIntervalTasks(n int) LoadtestOption
- func NumWorkers(n int) LoadtestOption
- func OutputBufferingFactor(factor int) LoadtestOption
- func RetriesDisabled(b bool) LoadtestOption
- type RetryChecker
- type Retryer
- type SugaredLogger
- type TaskProvider
Constants ¶
This section is empty.
Variables ¶
var (
	ErrBadReadTasksImpl     = errors.New("bad ReadTasks implementation: returned a value less than zero or larger than the input slice length")
	ErrRetriesFailedToFlush = errors.New("failed to flush all retries")
)
Functions ¶
func RootContext ¶
func RootContext(logger SugaredLogger) (context.Context, func())
RootContext returns a context that is canceled when the process receives an interrupt signal (SIGINT or SIGTERM).
It also returns a function that can be used to cancel the context manually.
Types ¶
type ConfigUpdate ¶
type ConfigUpdate struct {
// contains filtered or unexported fields
}
func (*ConfigUpdate) SetInterval ¶
func (cu *ConfigUpdate) SetInterval(d time.Duration)
func (*ConfigUpdate) SetNumIntervalTasks ¶
func (cu *ConfigUpdate) SetNumIntervalTasks(n int)
func (*ConfigUpdate) SetNumWorkers ¶
func (cu *ConfigUpdate) SetNumWorkers(n int)
type DoRetryChecker ¶
type DoRetryChecker interface {
	DoRetryer
	RetryChecker
}
type DoRetryer ¶
The DoRetryer interface is useful for tasks that have no upper bound on their retry count.
If you need a retry upper bound, have your task implement DoRetryChecker instead.
type Doer ¶
Doer is a basic task unit.
If your Doer also implements Retryer, note that the Doer can be run again on a different goroutine when the worker count is greater than one.
If you want your task to have a retry upper bound, implement DoRetryChecker.
type Loadtest ¶
type Loadtest struct {
// contains filtered or unexported fields
}
func NewLoadtest ¶
func NewLoadtest(taskProvider TaskProvider, options ...LoadtestOption) (*Loadtest, error)
func (*Loadtest) NewHttpTransport ¶
NewHttpTransport returns a new configured *http.Transport, which implements http.RoundTripper and can be used in tasks that have http clients.
Note that you may need to increase MaxIdleConns if your tasks target multiple hosts. MaxIdleConnsPerHost does not override the limit established by MaxIdleConns, so if your tasks are expected to communicate with multiple hosts you likely need to apply a scaling factor to MaxIdleConns so that connections can go idle for a time and still be reusable.
Note that if you are not connecting through a load balancer that preserves connections to a client, much of the intent described here does not apply.
Also, if the load balancer has neither a "max connection lifespan" behavior nor a "round robin" or "connection balancing" feature that works without forcing the loadtesting client to reconnect, then as you increase load your established connections may prevent the load from spreading to newly scaled-out recipients of that load.
By default, Go's http standard library does not expose a way to address this. The problem also worsens as the number of load balancers (or the number of IPs exposed for a DNS record) increases.
Rectifying this issue requires an option like https://github.com/golang/go/pull/46714 to be accepted by the Go maintainers.
type LoadtestOption ¶
type LoadtestOption func(*loadtestConfig)
func FlushRetriesOnShutdown ¶
func FlushRetriesOnShutdown(b bool) LoadtestOption
FlushRetriesOnShutdown is useful when your loadtest is more like a smoke test that must flush all tasks and have them all succeed.
func FlushRetriesTimeout ¶
func FlushRetriesTimeout(d time.Duration) LoadtestOption
FlushRetriesTimeout is only relevant when FlushRetriesOnShutdown(true) is used.
func Interval ¶
func Interval(d time.Duration) LoadtestOption
func Logger ¶
func Logger(logger SugaredLogger) LoadtestOption
func MaxIntervalTasks ¶
func MaxIntervalTasks(n int) LoadtestOption
func MaxTasks ¶
func MaxTasks(max int) LoadtestOption
MaxTasks sets an upper bound on the number of tasks the loadtest will perform.
func MaxWorkers ¶
func MaxWorkers(max int) LoadtestOption
func MetricsCsvFilename ¶
func MetricsCsvFilename(s string) LoadtestOption
func MetricsCsvFlushInterval ¶
func MetricsCsvFlushInterval(d time.Duration) LoadtestOption
func MetricsCsvWriterDisabled ¶
func MetricsCsvWriterDisabled(b bool) LoadtestOption
func NumIntervalTasks ¶
func NumIntervalTasks(n int) LoadtestOption
func NumWorkers ¶
func NumWorkers(n int) LoadtestOption
func OutputBufferingFactor ¶
func OutputBufferingFactor(factor int) LoadtestOption
func RetriesDisabled ¶
func RetriesDisabled(b bool) LoadtestOption
RetriesDisabled causes the loadtester to ignore any retry logic present on tasks.
type RetryChecker ¶
type SugaredLogger ¶
type SugaredLogger interface {
	// Fatalw will call os.Exit(1), so use with care only when it makes sense
	Fatalw(msg string, keysAndValues ...interface{})
	Debugw(msg string, keysAndValues ...interface{})
	Warnw(msg string, keysAndValues ...interface{})
	Errorw(msg string, keysAndValues ...interface{})
	Infow(msg string, keysAndValues ...interface{})
	// Panicw will panic, so use with care
	Panicw(msg string, keysAndValues ...interface{})
}
type TaskProvider ¶
type TaskProvider interface {
	// ReadTasks fills the provided slice up to slice length starting at index 0 and returns how many records have been inserted
	//
	// Failing to fill the whole slice will signal the end of the loadtest.
	//
	// Note in general you should not use this behavior to signal loadtests to stop
	// if your loadtest needs to be time-bound. For that case you should signal a stop
	// via the context. This stop on failure to fill behavior only exists for cases
	// where the author wants to exhaustively run a set of tasks and not bound the
	// loadtest to a timespan but rather completeness of the tasks space.
	//
	// Note that if you have only partially filled the slice, those filled task slots
	// will still be run before termination of the loadtest.
	ReadTasks([]Doer) int

	// UpdateConfigChan should return the same channel each time or nil;
	// but once nil it must never be non-nil again
	UpdateConfigChan() <-chan ConfigUpdate
}
TaskProvider describes how to read tasks into a loadtest and how to control a loadtest's configuration over time