core

package
v0.0.0-...-e8de369
Published: Jun 21, 2018 License: Apache-2.0 Imports: 41 Imported by: 0

Documentation

Index

Constants

const (
	// MaxKubernetesEmptyNodeDeletionTime is the maximum time needed by Kubernetes to delete an empty node.
	MaxKubernetesEmptyNodeDeletionTime = 3 * time.Minute
	// MaxCloudProviderNodeDeletionTime is the maximum time needed by cloud provider to delete a node.
	MaxCloudProviderNodeDeletionTime = 5 * time.Minute
	// MaxPodEvictionTime is the maximum time CA tries to evict a pod before giving up.
	MaxPodEvictionTime = 2 * time.Minute
	// EvictionRetryTime is the time after which CA retries a failed pod eviction.
	EvictionRetryTime = 10 * time.Second
	// PodEvictionHeadroom is the extra time we wait to catch situations where the pod ignores SIGTERM
	// and is killed with SIGKILL after MaxGracefulTerminationTime.
	PodEvictionHeadroom = 30 * time.Second
)
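Taken together, MaxPodEvictionTime and EvictionRetryTime bound how many eviction attempts CA can make for a single pod. A minimal sketch of that arithmetic (not the actual CA retry code, which lives in the scale-down logic):

```go
package main

import (
	"fmt"
	"time"
)

// Values mirror the constants documented above.
const (
	MaxPodEvictionTime = 2 * time.Minute
	EvictionRetryTime  = 10 * time.Second
)

// evictionRetryBudget returns how many eviction attempts fit into the
// overall per-pod eviction deadline.
func evictionRetryBudget(max, retry time.Duration) int {
	return int(max / retry)
}

func main() {
	// 2 minutes of budget at one retry every 10 seconds.
	fmt.Println(evictionRetryBudget(MaxPodEvictionTime, EvictionRetryTime)) // 12
}
```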
const (
	// ReschedulerTaintKey is the name of the taint created by rescheduler.
	ReschedulerTaintKey = "CriticalAddonsOnly"
)
const (
	// ScaleDownDisabledKey is the name of annotation marking node as not eligible for scale down.
	ScaleDownDisabledKey = "cluster-autoscaler.kubernetes.io/scale-down-disabled"
)

Variables

This section is empty.

Functions

func ConfigurePredicateCheckerForLoop

func ConfigurePredicateCheckerForLoop(unschedulablePods []*apiv1.Pod, schedulablePods []*apiv1.Pod, predicateChecker *simulator.PredicateChecker)

ConfigurePredicateCheckerForLoop can be run to update the predicateChecker configuration based on the current state of the cluster.

func FilterOutExpendableAndSplit

func FilterOutExpendableAndSplit(unschedulableCandidates []*apiv1.Pod, expendablePodsPriorityCutoff int) ([]*apiv1.Pod, []*apiv1.Pod)

FilterOutExpendableAndSplit filters out expendable pods and splits the rest into:

  • waiting for lower priority pods preemption
  • other pods.
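The split can be sketched with simplified types. This is an illustration of the documented behavior, not the real implementation: the `pod` struct and the use of `NominatedNodeName` as the "waiting for preemption" signal are assumptions standing in for `*apiv1.Pod`.

```go
package main

import "fmt"

// pod is a stand-in for *apiv1.Pod with only the fields this sketch needs.
type pod struct {
	Name     string
	Priority int
	// NominatedNodeName stands in for the signal that the pod is waiting
	// for lower-priority pods on that node to be preempted.
	NominatedNodeName string
}

// filterOutExpendableAndSplit mimics the documented behavior: pods below the
// priority cutoff are dropped as expendable, and the remainder is split into
// pods waiting for preemption and all other pods.
func filterOutExpendableAndSplit(pods []pod, cutoff int) (waiting, other []pod) {
	for _, p := range pods {
		if p.Priority < cutoff {
			continue // expendable: below the cutoff
		}
		if p.NominatedNodeName != "" {
			waiting = append(waiting, p)
		} else {
			other = append(other, p)
		}
	}
	return waiting, other
}

func main() {
	pods := []pod{
		{Name: "low", Priority: -10},
		{Name: "preempting", Priority: 100, NominatedNodeName: "node-1"},
		{Name: "pending", Priority: 100},
	}
	waiting, other := filterOutExpendableAndSplit(pods, 0)
	fmt.Println(len(waiting), len(other)) // 1 1
}
```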

func FilterOutExpendablePods

func FilterOutExpendablePods(pods []*apiv1.Pod, expendablePodsPriorityCutoff int) []*apiv1.Pod

FilterOutExpendablePods filters out expendable pods.

func FilterOutNodesFromNotAutoscaledGroups

func FilterOutNodesFromNotAutoscaledGroups(nodes []*apiv1.Node, cloudProvider cloudprovider.CloudProvider) ([]*apiv1.Node, errors.AutoscalerError)

FilterOutNodesFromNotAutoscaledGroups returns the subset of input nodes for which the cloud provider does not return an autoscaled node group.

func FilterOutSchedulable

func FilterOutSchedulable(unschedulableCandidates []*apiv1.Pod, nodes []*apiv1.Node, allScheduled []*apiv1.Pod, podsWaitingForLowerPriorityPreemption []*apiv1.Pod,
	predicateChecker *simulator.PredicateChecker, expendablePodsPriorityCutoff int) []*apiv1.Pod

FilterOutSchedulable checks whether pods from <unschedulableCandidates> marked as unschedulable by the Scheduler actually can't be scheduled on any node, and filters out the ones that can. It takes into account pods that are bound to a node and will be scheduled after lower-priority pod preemption.

func FilterSchedulablePodsForNode

func FilterSchedulablePodsForNode(context *context.AutoscalingContext, pods []*apiv1.Pod, nodeGroupId string, nodeInfo *schedulercache.NodeInfo) []*apiv1.Pod

FilterSchedulablePodsForNode filters pods that can be scheduled on the given node.

func GetNodeInfosForGroups

func GetNodeInfosForGroups(nodes []*apiv1.Node, cloudProvider cloudprovider.CloudProvider, kubeClient kube_client.Interface,
	daemonsets []*extensionsv1.DaemonSet, predicateChecker *simulator.PredicateChecker) (map[string]*schedulercache.NodeInfo, errors.AutoscalerError)

GetNodeInfosForGroups finds NodeInfos for all node groups used to manage the given nodes. It also returns a node group to sample node mapping. TODO(mwielgus): This returns a map keyed by URL, while most code (including the scheduler) uses node.Name as a key.

TODO(mwielgus): Review error policy - sometimes we may continue with partial errors.

func ScaleUp

func ScaleUp(context *context.AutoscalingContext, processors *ca_processors.AutoscalingProcessors, clusterStateRegistry *clusterstate.ClusterStateRegistry, unschedulablePods []*apiv1.Pod,
	nodes []*apiv1.Node, daemonSets []*extensionsv1.DaemonSet) (*status.ScaleUpStatus, errors.AutoscalerError)

ScaleUp tries to scale the cluster up. It returns true if it found a way to increase the size, false if it didn't, and an error if one occurred. It assumes that all nodes in the cluster are ready and in sync with instance groups.

func UpdateClusterStateMetrics

func UpdateClusterStateMetrics(csr *clusterstate.ClusterStateRegistry)

UpdateClusterStateMetrics updates metrics related to cluster state.

func UpdateEmptyClusterStateMetrics

func UpdateEmptyClusterStateMetrics()

UpdateEmptyClusterStateMetrics updates metrics related to empty cluster's state. TODO(aleksandra-malinowska): use long unregistered value from ClusterStateRegistry.

Types

type Autoscaler

type Autoscaler interface {
	// RunOnce represents an iteration in the control-loop of CA
	RunOnce(currentTime time.Time) errors.AutoscalerError
	// ExitCleanUp is a clean-up performed just before process termination.
	ExitCleanUp()
}

Autoscaler is the main component of CA, which scales node groups up and down according to its configuration. The configuration can be injected at the creation of an autoscaler.
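The interface implies the usual control-loop shape: call RunOnce on an interval, and ExitCleanUp once before shutdown. A sketch with simplified types (the error type is flattened to the built-in `error`, and `noopAutoscaler` is a hypothetical stand-in; real instances come from NewAutoscaler):

```go
package main

import (
	"fmt"
	"time"
)

// autoscaler mirrors the Autoscaler interface above with a simplified
// error type.
type autoscaler interface {
	RunOnce(currentTime time.Time) error
	ExitCleanUp()
}

// noopAutoscaler is a stand-in implementation used only to show the
// control-loop shape.
type noopAutoscaler struct{ iterations int }

func (a *noopAutoscaler) RunOnce(t time.Time) error {
	a.iterations++
	return nil
}

func (a *noopAutoscaler) ExitCleanUp() {}

// runLoop drives the autoscaler for a fixed number of iterations, the way
// a main loop would repeatedly call RunOnce, cleaning up on exit.
func runLoop(a autoscaler, n int) error {
	defer a.ExitCleanUp()
	for i := 0; i < n; i++ {
		if err := a.RunOnce(time.Now()); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	a := &noopAutoscaler{}
	if err := runLoop(a, 3); err != nil {
		fmt.Println("loop failed:", err)
	}
	fmt.Println(a.iterations) // 3
}
```

In the real binary the loop runs on a timer rather than a fixed count, but the RunOnce/ExitCleanUp contract is the same.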

func NewAutoscaler

func NewAutoscaler(opts AutoscalerOptions) (Autoscaler, errors.AutoscalerError)

NewAutoscaler creates an autoscaler of an appropriate type according to the parameters.

type AutoscalerBuilder

type AutoscalerBuilder interface {
	SetDynamicConfig(config dynamic.Config) AutoscalerBuilder
	Build() (Autoscaler, errors.AutoscalerError)
}

AutoscalerBuilder builds an instance of Autoscaler, which is the core of CA.

type AutoscalerBuilderImpl

type AutoscalerBuilderImpl struct {
	// contains filtered or unexported fields
}

AutoscalerBuilderImpl builds new autoscalers from its state, including the initial `AutoscalingOptions` given at startup and the `dynamic.Config` read on demand from the configmap.

func NewAutoscalerBuilder

func NewAutoscalerBuilder(autoscalingOptions context.AutoscalingOptions, predicateChecker *simulator.PredicateChecker,
	kubeClient kube_client.Interface, kubeEventRecorder kube_record.EventRecorder, listerRegistry kube_util.ListerRegistry, processors *ca_processors.AutoscalingProcessors) *AutoscalerBuilderImpl

NewAutoscalerBuilder builds an AutoscalerBuilder from required parameters

func (*AutoscalerBuilderImpl) Build

func (b *AutoscalerBuilderImpl) Build() (Autoscaler, errors.AutoscalerError)

Build builds an autoscaler according to the builder's state.

func (*AutoscalerBuilderImpl) SetDynamicConfig

func (b *AutoscalerBuilderImpl) SetDynamicConfig(config dynamic.Config) AutoscalerBuilder

SetDynamicConfig sets an instance of dynamic.Config read from a configmap, so that any new autoscaler built afterwards reflects the latest configuration contained in the configmap.
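Since SetDynamicConfig returns the builder, the intended usage is the standard chained builder pattern. A sketch with simplified stand-in types (`config`, `builder`, and `newBuilder` are illustrations, not the real `dynamic.Config` or `NewAutoscalerBuilder`, which takes the dependencies listed above):

```go
package main

import "fmt"

// config stands in for dynamic.Config.
type config struct{ NodeGroups []string }

// builder stands in for AutoscalerBuilderImpl: it captures startup options
// plus an optional dynamic config, and Build combines them.
type builder struct {
	staticOpts string
	dynamic    *config
}

func newBuilder(opts string) *builder { return &builder{staticOpts: opts} }

// SetDynamicConfig returns the builder so calls can be chained, matching
// the AutoscalerBuilder interface shape.
func (b *builder) SetDynamicConfig(c config) *builder {
	b.dynamic = &c
	return b
}

// Build combines the startup options with the latest dynamic config, if any.
func (b *builder) Build() string {
	if b.dynamic != nil {
		return fmt.Sprintf("%s + %d dynamic node groups", b.staticOpts, len(b.dynamic.NodeGroups))
	}
	return b.staticOpts
}

func main() {
	got := newBuilder("static options").
		SetDynamicConfig(config{NodeGroups: []string{"ng-1", "ng-2"}}).
		Build()
	fmt.Println(got)
}
```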

type AutoscalerOptions

type AutoscalerOptions struct {
	context.AutoscalingOptions
	KubeClient        kube_client.Interface
	KubeEventRecorder kube_record.EventRecorder
	PredicateChecker  *simulator.PredicateChecker
	ListerRegistry    kube_util.ListerRegistry
	Processors        *ca_processors.AutoscalingProcessors
}

AutoscalerOptions is the whole set of options for configuring an autoscaler.

type NodeDeleteStatus

type NodeDeleteStatus struct {
	sync.Mutex
	// contains filtered or unexported fields
}

NodeDeleteStatus tells whether a node is being deleted right now.

func (*NodeDeleteStatus) IsDeleteInProgress

func (n *NodeDeleteStatus) IsDeleteInProgress() bool

IsDeleteInProgress returns true if a node is being deleted.

func (*NodeDeleteStatus) SetDeleteInProgress

func (n *NodeDeleteStatus) SetDeleteInProgress(status bool)

SetDeleteInProgress sets the deletion process status.
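Since NodeDeleteStatus embeds sync.Mutex and hides its fields, a plausible shape is a mutex-guarded boolean flag. The sketch below is an assumption about the unexported internals, written to match the two documented methods:

```go
package main

import (
	"fmt"
	"sync"
)

// nodeDeleteStatus sketches a plausible layout for NodeDeleteStatus: an
// embedded mutex guarding an unexported flag (the real fields are
// unexported, so the flag name here is an assumption).
type nodeDeleteStatus struct {
	sync.Mutex
	deleteInProgress bool
}

// IsDeleteInProgress returns true if a node is being deleted.
func (n *nodeDeleteStatus) IsDeleteInProgress() bool {
	n.Lock()
	defer n.Unlock()
	return n.deleteInProgress
}

// SetDeleteInProgress sets the deletion process status.
func (n *nodeDeleteStatus) SetDeleteInProgress(status bool) {
	n.Lock()
	defer n.Unlock()
	n.deleteInProgress = status
}

func main() {
	var s nodeDeleteStatus
	s.SetDeleteInProgress(true)
	fmt.Println(s.IsDeleteInProgress()) // true
	s.SetDeleteInProgress(false)
	fmt.Println(s.IsDeleteInProgress()) // false
}
```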

type ScaleDown

type ScaleDown struct {
	// contains filtered or unexported fields
}

ScaleDown is responsible for maintaining the state needed to perform unneeded node removals.

func NewScaleDown

func NewScaleDown(context *context.AutoscalingContext, clusterStateRegistry *clusterstate.ClusterStateRegistry) *ScaleDown

NewScaleDown builds new ScaleDown object.

func (*ScaleDown) CleanUp

func (sd *ScaleDown) CleanUp(timestamp time.Time)

CleanUp cleans up the internal ScaleDown state.

func (*ScaleDown) CleanUpUnneededNodes

func (sd *ScaleDown) CleanUpUnneededNodes()

CleanUpUnneededNodes clears the list of unneeded nodes.

func (*ScaleDown) GetCandidatesForScaleDown

func (sd *ScaleDown) GetCandidatesForScaleDown() []*apiv1.Node

GetCandidatesForScaleDown gets candidates for scale down.

func (*ScaleDown) TryToScaleDown

func (sd *ScaleDown) TryToScaleDown(allNodes []*apiv1.Node, pods []*apiv1.Pod, pdbs []*policyv1.PodDisruptionBudget, currentTime time.Time) (ScaleDownResult, errors.AutoscalerError)

TryToScaleDown tries to scale down the cluster. It returns a ScaleDownResult indicating whether any node was removed, and an error if one occurred.

func (*ScaleDown) UpdateUnneededNodes

func (sd *ScaleDown) UpdateUnneededNodes(
	nodes []*apiv1.Node,
	nodesToCheck []*apiv1.Node,
	pods []*apiv1.Pod,
	timestamp time.Time,
	pdbs []*policyv1.PodDisruptionBudget) errors.AutoscalerError

UpdateUnneededNodes calculates which nodes are not needed, i.e. all of their pods can be scheduled somewhere else, and updates the unneededNodes map accordingly. It also computes information about where pods can be rescheduled and the node utilization level. Timestamp is the current timestamp. The computations are made only for nodes managed by CA.

type ScaleDownResult

type ScaleDownResult int

ScaleDownResult represents the state of scale down.

const (
	// ScaleDownError - scale down finished with error.
	ScaleDownError ScaleDownResult = iota
	// ScaleDownNoUnneeded - no unneeded nodes and no errors.
	ScaleDownNoUnneeded
	// ScaleDownNoNodeDeleted - unneeded nodes present but not available for deletion.
	ScaleDownNoNodeDeleted
	// ScaleDownNodeDeleted - a node was deleted.
	ScaleDownNodeDeleted
	// ScaleDownNodeDeleteStarted - a node deletion process was started.
	ScaleDownNodeDeleteStarted
)
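A caller of TryToScaleDown typically branches on the result. A sketch of such a switch, with the constants mirrored locally so the example is self-contained (the description strings are paraphrases of the comments above, not CA output):

```go
package main

import "fmt"

// scaleDownResult mirrors the ScaleDownResult constants above.
type scaleDownResult int

const (
	scaleDownError scaleDownResult = iota
	scaleDownNoUnneeded
	scaleDownNoNodeDeleted
	scaleDownNodeDeleted
	scaleDownNodeDeleteStarted
)

// describe shows how a caller might branch on the result of a scale-down
// attempt; each case corresponds to one documented constant.
func describe(r scaleDownResult) string {
	switch r {
	case scaleDownError:
		return "scale down finished with error"
	case scaleDownNoUnneeded:
		return "no unneeded nodes"
	case scaleDownNoNodeDeleted:
		return "unneeded nodes present but none deletable"
	case scaleDownNodeDeleted:
		return "node deleted"
	case scaleDownNodeDeleteStarted:
		return "node deletion started"
	default:
		return "unknown result"
	}
}

func main() {
	fmt.Println(describe(scaleDownNodeDeleted))
}
```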

type StaticAutoscaler

type StaticAutoscaler struct {
	// AutoscalingContext consists of validated settings and options for this autoscaler
	*context.AutoscalingContext
	kube_util.ListerRegistry
	// contains filtered or unexported fields
}

StaticAutoscaler is an autoscaler which has all the core functionality of a CA, but without the reconfiguration feature.

func NewStaticAutoscaler

func NewStaticAutoscaler(opts context.AutoscalingOptions, predicateChecker *simulator.PredicateChecker,
	kubeClient kube_client.Interface, kubeEventRecorder kube_record.EventRecorder, listerRegistry kube_util.ListerRegistry,
	processors *ca_processors.AutoscalingProcessors) (*StaticAutoscaler, errors.AutoscalerError)

NewStaticAutoscaler creates an instance of Autoscaler filled with the provided parameters.

func (*StaticAutoscaler) ExitCleanUp

func (a *StaticAutoscaler) ExitCleanUp()

ExitCleanUp removes status configmap.

func (*StaticAutoscaler) RunOnce

func (a *StaticAutoscaler) RunOnce(currentTime time.Time) errors.AutoscalerError

RunOnce iterates over node groups and scales them up or down if necessary.
