func sequentialMerge
has a cyclomatic complexity of 25 with "high" risk

var reMergeKey = regexp.MustCompile(`\{\{\.Resp(\d+)_([\w-\.]+)\}\}`)

func sequentialMerge(reqCloner func(*Request) *Request, patterns []string, timeout time.Duration, rc ResponseCombiner, next ...Proxy) Proxy {
	return func(ctx context.Context, request *Request) (*Response, error) {
		localCtx, cancel := context.WithTimeout(ctx, timeout)
func newPluginMiddleware
has a cyclomatic complexity of 17 with "high" risk

		fmt.Sprintf("%s %s -> %s", remote.ParentEndpointMethod, remote.ParentEndpoint, remote.URLPattern), cfg)
}

func newPluginMiddleware(logger logging.Logger, tag, pattern string, cfg map[string]interface{}) Middleware {
	plugins, ok := cfg["name"].([]interface{})
	if !ok {
		return emptyMiddlewareFallback(logger)
func TestNewHTTPProxy_ok
has a cyclomatic complexity of 19 with "high" risk

	"github.com/luraproject/lura/v2/transport/http/client"
)

func TestNewHTTPProxy_ok(t *testing.T) {
	expectedMethod := "GET"
	backendServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.ContentLength != 11 {
func generateCerts
has a cyclomatic complexity of 17 with "high" risk

	}
}

func generateCerts() error {
	hosts := []string{"127.0.0.1", "::1", "localhost"}

	priv, err := rsa.GenerateKey(rand.Reader, 2048)
func NewEngine
has a cyclomatic complexity of 17 with "high" risk

}

// NewEngine returns an initialized gin engine
func NewEngine(cfg config.ServiceConfig, opt EngineOptions) *gin.Engine {
	gin.SetMode(gin.ReleaseMode)
	if cfg.Debug {
		opt.Logger.Debug(logPrefix, "Debug enabled")
A function with high cyclomatic complexity can be hard to understand and maintain. Cyclomatic complexity is a software metric that measures the number of independent paths through a function: a higher score means more decision points, and therefore a more complex function.
Functions with high cyclomatic complexity are more likely to contain bugs and are harder to test. They tend to reduce code maintainability and increase development time.
To reduce the cyclomatic complexity of a function, you can extract independent branches into smaller helper functions, return early to flatten nested conditionals, merge duplicated decision points, or replace long if/else and switch chains with data-driven lookups.

Consider this FizzBuzz variant, where every case clause adds a decision point:
package main

import "log"

func fizzbuzzfuzz(x int) { // cc = 1
	if x == 0 || x < 0 { // cc = 3 (if, ||)
		return
	}
	countDiv3, countDiv5 := 0, 0
	for i := 1; i <= x; i++ { // cc = 4 (for)
		switch i % 15 {
		case 0: // cc = 5 (case)
			countDiv3++
			countDiv5++
			log.Println("fizzbuzz")
		case 3: // cc = 6 (case)
			fallthrough
		case 6: // cc = 7 (case)
			fallthrough
		case 9: // cc = 8 (case)
			fallthrough
		case 12: // cc = 9 (case)
			countDiv3++
			log.Println("fizz")
		case 5: // cc = 10 (case)
			fallthrough
		case 10: // cc = 11 (case)
			countDiv5++
			log.Println("buzz")
		default:
			log.Printf("%d\n", i)
		}
	}
	log.Printf("multiples of 3: %d, multiples of 5: %d\n", countDiv3, countDiv5)
} // CC == 11; raises issues
package main

import "log"

func fizzbuzz(x int) { // cc = 1
	for i := 1; i <= x; i++ { // cc = 2 (for)
		y := i%3 == 0
		z := i%5 == 0
		if y == z { // cc = 3 (if)
			if !y { // cc = 4 (if)
				log.Printf("%d\n", i)
			} else {
				log.Println("fizzbuzz")
			}
		} else {
			if y { // cc = 5 (if)
				log.Println("fizz")
			} else {
				log.Println("buzz")
			}
		}
	}
} // CC == 5
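Another way to keep decision points down is to make the branching data-driven. The sketch below (names are illustrative, not taken from any codebase above) returns the labels instead of logging them, which also makes the function easy to unit-test; its complexity stays at 5 even if more divisors are added:

```go
package main

import (
	"fmt"
	"strconv"
)

// fizzbuzzLabels returns the FizzBuzz output for 1..n as a slice.
// The divisor table replaces a switch with many case clauses, so the
// cyclomatic complexity does not grow with the number of divisors.
func fizzbuzzLabels(n int) []string { // cc = 1
	divisors := []struct {
		d    int
		word string
	}{{3, "fizz"}, {5, "buzz"}}

	out := make([]string, 0, n)
	for i := 1; i <= n; i++ { // cc = 2 (for)
		s := ""
		for _, e := range divisors { // cc = 3 (for)
			if i%e.d == 0 { // cc = 4 (if)
				s += e.word
			}
		}
		if s == "" { // cc = 5 (if)
			s = strconv.Itoa(i) // not a multiple of anything: keep the number
		}
		out = append(out, s)
	}
	return out
} // CC == 5

func main() {
	fmt.Println(fizzbuzzLabels(15))
}
```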
The cyclomatic complexity threshold can be configured using the
cyclomatic_complexity_threshold
setting in the
.deepsource.toml
config file.
Configuring this is optional. If you don't provide a value, the Analyzer will
raise issues for functions with complexity higher than the default threshold,
which is medium
(only raise issues for scores >15) for the Go Analyzer.
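As a sketch of what that configuration might look like (assuming the standard DeepSource analyzer block layout), raising the threshold to high for the Go Analyzer could be written as:

```toml
version = 1

[[analyzers]]
name = "go"

  [analyzers.meta]
  # Only raise issues for functions with cyclomatic complexity > 25.
  cyclomatic_complexity_threshold = "high"
```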
Here's the mapping of the risk category to the cyclomatic complexity score to help you configure this better:
| Risk category | Cyclomatic complexity range | Recommended action |
|---|---|---|
| low | 1-5 | No action needed. |
| medium | 6-15 | Review and monitor. |
| high | 16-25 | Review and refactor. Add comments if the function absolutely must be kept as it is. |
| very-high | 26-50 | Refactor to reduce the complexity. |
| critical | >50 | Must refactor. Complexity this high can make the code untestable and very difficult to understand. |