func convertValue
has a cyclomatic complexity of 23 with "high" risk

	_noValue = reflect.Value{}
)

func convertValue(fieldType reflect.Type, value string) (reflect.Value, error) {
	switch fieldType.Kind() {
	case reflect.Ptr:
		fieldType := fieldType.Elem()
func createFilledRequestObject
has a cyclomatic complexity of 23 with "high" risk

	return nil
}

func createFilledRequestObject(r *http.Request, obj interface{}, parsingErrors map[string]string) (ret reflect.Value, response reflect.Value, err error) {
	typ := reflect.TypeOf(obj)

	if typ.Kind() == reflect.Ptr {
func ParseJsonTag
has a cyclomatic complexity of 20 with "high" risk

	Style *string
}

func ParseJsonTag(f reflect.StructField) *jsonTag {
	ret := &jsonTag{
		Name: f.Name,
	}
func generateStructureSchema
has a cyclomatic complexity of 28 with "very-high" risk

	return pkgName(t.PkgPath())
}

func (s *Schema) generateStructureSchema(ctx context.Context, doc *openapi3.T, t reflect.Type, inlineLevel int, fieldInfo shared.AttributeInfo, callbacksObject shared.ChipiCallbacks) (*openapi3.Schema, error) {
	ret := &openapi3.Schema{
		Type: "object",
	}
func generateSchemaFor
has a cyclomatic complexity of 28 with "very-high" risk

	}, nil
}

func (s *Schema) generateSchemaFor(ctx context.Context, doc *openapi3.T, t reflect.Type, inlineLevel int, fieldInfo shared.AttributeInfo, callbacksObject shared.ChipiCallbacks) (*openapi3.SchemaRef, error) {
	fullName := typeName(t)

	if !fieldInfo.Empty() {
A function with high cyclomatic complexity can be hard to understand and maintain. Cyclomatic complexity is a software metric that measures the number of independent paths through a function. A higher cyclomatic complexity indicates that the function has more decision points and is more complex.
Functions with high cyclomatic complexity are more likely to have bugs and be harder to test. They may lead to reduced code maintainability and increased development time.
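To make the metric concrete, here is a small, self-contained sketch of how decision points accumulate; the function and its annotations are illustrative, not taken from the flagged code above. The function body starts at complexity 1, and each `if` adds one more path:

```go
package main

import "fmt"

// classify has a cyclomatic complexity of 4:
// 1 for the function body, plus 1 for each if.
func classify(n int) string { // cc = 1
	if n < 0 { // cc = 2
		return "negative"
	}
	if n == 0 { // cc = 3
		return "zero"
	}
	if n%2 == 0 { // cc = 4
		return "even"
	}
	return "odd"
}

func main() {
	fmt.Println(classify(-3), classify(0), classify(4), classify(7))
}
```

With a score of 4, `classify` falls in the "low" risk bucket and would not be flagged.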
To reduce the cyclomatic complexity of a function, you can extract independent branches into helper functions, merge duplicated cases, flatten nested conditionals with early returns, or replace sprawling switch statements with simpler logic. Compare the two versions below:
package main

import "log"

func fizzbuzzfuzz(x int) { // cc = 1
	if x == 0 || x < 0 { // cc = 3 (if, ||)
		return
	}
	countDiv3, countDiv5 := 0, 0
	for i := 1; i <= x; i++ { // cc = 4 (for)
		switch i % 15 {
		case 0: // cc = 5 (case)
			countDiv3 += 1
			countDiv5 += 1
			log.Println("fizzbuzz")
		case 3: // cc = 6 (case)
			fallthrough
		case 6: // cc = 7 (case)
			fallthrough
		case 9: // cc = 8 (case)
			fallthrough
		case 12: // cc = 9 (case)
			countDiv3 += 1
			log.Println("fizz")
		case 5: // cc = 10 (case)
			fallthrough
		case 10: // cc = 11 (case)
			countDiv5 += 1
			log.Println("buzz")
		default:
			log.Printf("%d\n", i)
		}
	}
} // CC == 11; raises issues
package main

import "log"

func fizzbuzz(x int) { // cc = 1
	for i := 1; i <= x; i++ { // cc = 2 (for)
		y := i%3 == 0
		z := i%5 == 0
		if y == z { // cc = 3 (if)
			if !y { // cc = 4 (if)
				log.Printf("%d\n", i)
			} else {
				log.Println("fizzbuzz")
			}
		} else {
			if y { // cc = 5 (if)
				log.Println("fizz")
			} else {
				log.Println("buzz")
			}
		}
	}
} // CC == 5
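Another common refactoring is to move the decision logic into a small helper function, so that each function stays well below the threshold. A sketch, assuming the same fizzbuzz behavior; the helper name `fizzbuzzLabel` is illustrative, not from the flagged code above:

```go
package main

import "log"

// fizzbuzzLabel isolates the classification; cc = 4
// (1 for the function, plus 1 per case clause).
func fizzbuzzLabel(i int) string { // cc = 1
	switch {
	case i%15 == 0: // cc = 2
		return "fizzbuzz"
	case i%3 == 0: // cc = 3
		return "fizz"
	case i%5 == 0: // cc = 4
		return "buzz"
	}
	return ""
}

// fizzbuzz only loops and prints; cc = 3 (for, if).
func fizzbuzz(x int) {
	for i := 1; i <= x; i++ {
		if label := fizzbuzzLabel(i); label != "" {
			log.Println(label)
		} else {
			log.Printf("%d\n", i)
		}
	}
}

func main() {
	fizzbuzz(15)
}
```

The total number of branches is unchanged, but no single function exceeds a complexity of 4, and the classification logic can now be unit-tested in isolation.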
The cyclomatic complexity threshold can be configured using the cyclomatic_complexity_threshold setting (docs) in the .deepsource.toml config file. Configuring this is optional. If you don't provide a value, the Analyzer will raise issues for functions with complexity higher than the default threshold, which is medium (only raise issues for > 15) for the Go Analyzer.
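As a sketch, assuming the usual .deepsource.toml layout with the threshold set under the Go analyzer's meta section (check the docs for the exact keys and accepted values), a config that only flags functions above the "high" range might look like:

```toml
version = 1

[[analyzers]]
name = "go"
enabled = true

  [analyzers.meta]
  # Assumed setting: raise issues only for complexity above the
  # "high" range (> 25), i.e. very-high and critical functions.
  cyclomatic_complexity_threshold = "high"
```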
Here's the mapping of the risk category to the cyclomatic complexity score to help you configure this better:
Risk category | Cyclomatic complexity range | Recommended action |
---|---|---|
low | 1-5 | No action needed. |
medium | 6-15 | Review and monitor. |
high | 16-25 | Review and refactor. Recommended to add comments if the function absolutely must be kept as it is. |
very-high | 26-50 | Refactor to reduce the complexity. |
critical | >50 | Must refactor. Complexity at this level can make the code untestable and very difficult to understand. |