Hi Tobias, thank you for your reply! I have questions about types. Could you please answer them?
Questions related to “type_for_interval”:

1. What happens in these lines?

     int precision = MAX (mpz_sizeinbase (bound_one, 2),
                          mpz_sizeinbase (bound_two, 2));
     if (precision > BITS_PER_WORD)
       {
         gloog_error = true;
         return integer_type_node;
       }

   Do we try to count the maximum number of value bits in bound_one
   and bound_two? Why can't it be greater than BITS_PER_WORD?

2. Why do we want to generate signed types as much as possible?

3. Why do we always have enough precision in the case of
   precision < wider_precision?

Questions related to “clast_to_gcc_expression”:

4. What is the idea behind this code?

     if (POINTER_TYPE_P (TREE_TYPE (name)) != POINTER_TYPE_P (type))
       name = convert_to_ptrofftype (name);

5. Why do we check POINTER_TYPE_P (type)? (“type” has type tree, and
   the manual says that a tree is a pointer type.)

Questions related to “max_precision_type”:

6. Why is type1, for example, the maximal precision type when
   POINTER_TYPE_P (type1) is true?

7. Why do we have enough precision for p2 in the case of p1 > p2 and
   signed type1?

8. Why do we always build a signed integer type in the line

     type = build_nonstandard_integer_type (precision, false);

   ?

Questions related to “type_for_clast_red”:

9. Why do we use this code in the case of clast_red_sum?

     value_min (m1, bound_one, bound_two);
     value_min (m2, b1, b2);
     mpz_add (bound_one, m1, m2);

   Can bound_one be greater than bound_two? (We also consider two
   cases in “type_for_interval”.)

10. Why do we assume that the new bounds are min (bound_one, bound_two)
    and min (b1, b2) in the case of clast_red_min?

--
Cheers, Roman Gareev