func ReleasePromote has a cyclomatic complexity of 22 with "high" risk:

```go
func (p *Provider) ReleasePromote(app, id string, opts structs.ReleasePromoteOptions) error {
	a, err := p.AppGet(app)
	if err != nil {
		return errors.WithStack(err)
```

func processFromPod has a cyclomatic complexity of 19 with "high" risk:

```go
func (p *Provider) processFromPod(pd ac.Pod) (*structs.Process, error) {
	app := pd.ObjectMeta.Labels["app"]

	c, err := primaryContainer(pd.Spec.Containers, pd.ObjectMeta.Labels["app"])
```

func podSpecFromRunOptions has a cyclomatic complexity of 18 with "high" risk:

```go
func (p *Provider) podSpecFromRunOptions(app, service string, opts structs.ProcessRunOptions) (*ac.PodSpec, error) {
	s, err := p.podSpecFromService(app, service, common.DefaultString(opts.Release, ""))
	if err != nil {
		return nil, errors.WithStack(err)
```

func podSpecFromService has a cyclomatic complexity of 19 with "high" risk:

```go
func (p *Provider) podSpecFromService(app, service, release string) (*ac.PodSpec, error) {
	a, err := p.AppGet(app)
	if err != nil {
		return nil, errors.WithStack(err)
```

func GetRackMetrics has a cyclomatic complexity of 17 with "high" risk:

```go
func (m *MetricScraperClient) GetRackMetrics(opts structs.MetricsOptions) (structs.Metrics, error) {
	if m.host == "" {
		return nil, errors.WithStack(fmt.Errorf("unimplemented"))
	}
```
A function with high cyclomatic complexity can be hard to understand and maintain. Cyclomatic complexity is a software metric that measures the number of independent paths through a function. A higher cyclomatic complexity indicates that the function has more decision points and is more complex.
Functions with high cyclomatic complexity are more likely to have bugs and be harder to test. They may lead to reduced code maintainability and increased development time.
To reduce the cyclomatic complexity of a function, you can extract independent blocks of logic into helper functions, flatten nested conditionals with early returns, and consolidate duplicated branches. Compare the two implementations below:
```go
package main

import "log"

func fizzbuzzfuzz(x int) { // cc = 1
	var countDiv3, countDiv5 int
	if x == 0 || x < 0 { // cc = 3 (if, ||)
		return
	}
	for i := 1; i <= x; i++ { // cc = 4 (for)
		switch i % 15 {
		case 0: // cc = 5 (case)
			countDiv3 += 1
			countDiv5 += 1
			log.Println("fizzbuzz")
		case 3:
			fallthrough
		case 6:
			fallthrough
		case 9:
			fallthrough
		case 12: // cc = 9 (case)
			countDiv3 += 1
			log.Println("fizz")
		case 5:
			fallthrough
		case 10: // cc = 11 (case)
			countDiv5 += 1
			log.Println("buzz")
		default:
			log.Printf("%d\n", i)
		}
	}
} // CC == 11; raises issues
```
```go
package main

import "log"

func fizzbuzz(x int) { // cc = 1
	for i := 1; i <= x; i++ { // cc = 2 (for)
		y := i%3 == 0
		z := i%5 == 0
		if y == z { // cc = 3 (if)
			if !y { // cc = 4 (if)
				log.Printf("%d\n", i)
			} else {
				log.Println("fizzbuzz")
			}
		} else {
			if y { // cc = 5 (if)
				log.Println("fizz")
			} else {
				log.Println("buzz")
			}
		}
	}
} // CC == 5
```
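Another way to apply the "extract helper functions" advice is to move the per-number decision into its own small function, so each function stays well below the threshold. This is an illustrative sketch (the names `fizzbuzzWord` and `fizzbuzzExtracted` are not from the codebase above):

```go
package main

import (
	"log"
	"strconv"
)

// fizzbuzzWord computes the output for a single value.
func fizzbuzzWord(i int) string { // cc = 1
	out := ""
	if i%3 == 0 { // cc = 2 (if)
		out += "fizz"
	}
	if i%5 == 0 { // cc = 3 (if)
		out += "buzz"
	}
	if out == "" { // cc = 4 (if)
		out = strconv.Itoa(i)
	}
	return out
} // CC == 4

func fizzbuzzExtracted(x int) { // cc = 1
	for i := 1; i <= x; i++ { // cc = 2 (for)
		log.Println(fizzbuzzWord(i))
	}
} // CC == 2

func main() {
	fizzbuzzExtracted(15)
}
```

Neither function exceeds a complexity of 4, and the per-number logic can now be unit-tested without capturing log output.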
The cyclomatic complexity threshold can be configured using the `cyclomatic_complexity_threshold` setting (docs) in the `.deepsource.toml` config file. Configuring this is optional. If you don't provide a value, the Analyzer raises issues for functions with complexity higher than the default threshold, which is medium (only raise issues for >15) for the Go Analyzer.
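As an illustration, a `.deepsource.toml` fragment that only flags "high"-risk functions might look like the following (the exact analyzer block shape and accepted values should be confirmed against the DeepSource configuration docs):

```toml
version = 1

[[analyzers]]
name = "go"

  [analyzers.meta]
  # Assumed setting: only raise issues above the "high" risk category.
  cyclomatic_complexity_threshold = "high"
```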
Here's the mapping of risk category to cyclomatic complexity score to help you configure this better:

Risk category | Cyclomatic complexity range | Recommended action
---|---|---
low | 1-5 | No action needed.
medium | 6-15 | Review and monitor.
high | 16-25 | Review and refactor. Add a comment explaining why if the function absolutely must be kept as it is.
very-high | 26-50 | Refactor to reduce the complexity.
critical | >50 | Must refactor. Complexity at this level can make the code untestable and very difficult to understand.