GCC Middle and Back End API Reference

Variables  
int  folding_initializer 
bool  force_folding_builtin_constant_p 

inline bool addr_expr_of_non_mem_decl_p  (  tree  ) 
Is it an ADDR_EXPR of a DECL that's not in memory?
int aggregate_value_p  (  const_tree  , 
const_tree  
) 
bool alloca_call_p  (  const_tree  ) 
int allocate_decl_uid  (  void  ) 
Allocate and return a new UID from the DECL_UID namespace.
Referenced by make_node_stat().
void allocate_struct_function  (  tree  , 
bool  
) 
bool array_at_struct_end_p  (  tree  ) 
Return a tree of sizetype representing the size, in bytes, of the element of EXP, an ARRAY_REF or an ARRAY_RANGE_REF.
Return a tree representing the lower bound of the array mentioned in EXP, an ARRAY_REF or an ARRAY_RANGE_REF.
Return a tree representing the upper bound of the array mentioned in EXP, an ARRAY_REF or an ARRAY_RANGE_REF.
tree array_type_nelts  (  const_tree  ) 
void assign_assembler_name_if_neeeded  (  tree  ) 
bool associative_tree_code  (  enum  tree_code  ) 
int attribute_list_contained  (  const_tree  , 
const_tree  
) 
int attribute_list_equal  (  const_tree  , 
const_tree  
) 
bool auto_var_in_fn_p  (  const_tree  , 
const_tree  
) 
bool avoid_folding_inline_builtin  (  tree  ) 
tree bit_position  (  const_tree  ) 
bool block_may_fallthru  (  const_tree  ) 
location_t* block_nonartificial_location  (  tree  ) 
tree block_ultimate_origin  (  const_tree  ) 

inline static 
_loc versions of build[1-5].
tree build_call_array_loc  (  location_t  loc, 
tree  return_type,  
tree  fn,  
int  nargs,  
const tree *  args  
) 
Build a CALL_EXPR of class tcc_vl_exp with the indicated RETURN_TYPE and FN and a null static chain slot. NARGS is the number of call arguments which are specified as a tree array ARGS.
Referenced by maybe_with_size_expr().
tree build_call_expr_loc  (  location_t  , 
tree  ,  
int  ,  
...  
) 
tree build_call_expr_loc_array  (  location_t  , 
tree  ,  
int  ,  
tree *  
) 
void build_common_builtin_nodes  (  void  ) 
Call this function after instantiating all builtins that the language front end cares about. This will build the rest of the builtins that are relied upon by the tree optimizers and the middle-end.
If we're checking the stack, `alloca' can throw.
If there's a possibility that we might use the ARM EABI, build the alternate __cxa_end_cleanup node used to resume from C++ and Java.
The exception object and filter values from the runtime. The argument must be zero before exception lowering, i.e. from the front end. After exception lowering, it will be the region number for the exception landing pad. These functions are PURE instead of CONST to prevent them from being hoisted past the exception edge that will initialize its value in the landing pad.
Only use TM_PURE if we have TM language support.
Complex multiplication and division. These are handled as builtins rather than optabs because emit_library_call_value doesn't support complex. Further, we can do slightly better with folding these beasties if the real and complex parts of the arguments are separate.
void build_common_tree_nodes  (  bool  , 
bool  
) 
tree build_constructor  (  tree  , 
vec< constructor_elt, va_gc > *  
) 
tree build_decl_stat  (  location_t  , 
enum  tree_code,  
tree  ,  
tree  MEM_STAT_DECL  
) 
This is in tree-inline.c since the routine uses data structures from the inliner.
tree build_empty_stmt  (  location_t  ) 
tree build_fold_addr_expr_loc  (  location_t  , 
tree  
) 
tree build_fold_addr_expr_with_type_loc  (  location_t  , 
tree  ,  
tree  
) 
tree build_fold_indirect_ref_loc  (  location_t  , 
tree  
) 
Build variant of function decl ORIG_DECL skipping ARGS_TO_SKIP and the return value if SKIP_RETURN is true. Arguments from DECL_ARGUMENTS list can't be removed now, since they are linked by TREE_CHAIN directly. The caller is responsible for eliminating them when they are being duplicated (i.e. copy_arguments_for_versioning).
For declarations setting DECL_VINDEX (i.e. methods) we expect first argument to be THIS pointer.
When signature changes, we need to clear builtin info.
References tree_low_cst(), lang_hooks_for_types::type_for_size, and lang_hooks::types.
tree build_int_cst  (  tree  , 
HOST_WIDE_INT  
) 
tree build_int_cst_type  (  tree  , 
HOST_WIDE_INT  
) 
tree build_int_cst_wide  (  tree  , 
unsigned  HOST_WIDE_INT,  
HOST_WIDE_INT  
) 

inline static 
Create an INT_CST node with a CST value zero extended.
Referenced by fold_builtin_strcspn().
tree build_invariant_address  (  tree  , 
tree  ,  
HOST_WIDE_INT  
) 
Build a METHOD_TYPE for a member of BASETYPE. The RETTYPE (a TYPE) and ARGTYPES (a TREE_LIST) are the return type and arguments types for the method. An implicit additional parameter (of type pointer to BASETYPE) is added to the ARGTYPES.
Make a node of the sort we want.
The actual arglist for this function includes a "hidden" argument which is "this". Put it into the list of argument types.
If we already have such a type, use the old one.
Set up the canonical type.
References double_int::mask(), mpz_set_double_int(), and tree_to_double_int().
tree build_nonstandard_integer_type  (  unsigned HOST_WIDE_INT  precision, 
int  unsignedp  
) 
Builds a signed or unsigned integer type of precision PRECISION. Used for C bitfields whose precision does not match that of builtin target types.
tree build_nt  (  enum  tree_code, 
...  
) 
Construct various types of nodes.
tree build_omp_clause  (  location_t  , 
enum  omp_clause_code  
) 
tree build_optimization_node  (  struct gcc_options *  opts  ) 
Return a tree node that encapsulates the optimization options in OPTS.
tree build_personality_function  (  const char *  ) 
Constructors for pointer, array and function types. (RECORD_TYPE, UNION_TYPE and ENUMERAL_TYPE nodes are constructed by language-dependent code, not here.)
Construct, lay out and return the type of pointers to TO_TYPE with mode MODE. If CAN_ALIAS_ALL is TRUE, indicate this type can reference all of memory. If such a type has already been constructed, reuse it.
If the pointed-to type has the may_alias attribute set, force a TYPE_REF_CAN_ALIAS_ALL pointer to be generated.
In some cases, languages will have things that aren't a POINTER_TYPE (such as a RECORD_TYPE for fat pointers in Ada) as TYPE_POINTER_TO. In that case, return that type without regard to the rest of our operands. ??? This is a kludge, but consistent with the way this function has always operated and there doesn't seem to be a good way to avoid this at the moment.
First, if we already have a type for pointers to TO_TYPE and it's the proper mode, use it.
Lay out the type. This function has many callers that are concerned with expression construction, and this simplifies them all.
Referenced by expand_omp_atomic_load(), make_or_reuse_fract_type(), and simple_cst_equal().
Like get_qualified_type, but creates the type if it does not exist. This function never returns NULL_TREE.
Given a range, LOW, HIGH, and IN_P, an expression, EXP, and a result type, TYPE, return an expression to test if EXP is in (or out of, depending on IN_P) the range. Return 0 if the test couldn't be created.
Disable this optimization for function pointer expressions on targets that require function pointer canonicalization.
Optimize (c>=1) && (c<=127) into (signed char)c > 0.
Optimize (c>=low) && (c<=high) into (c-low>=0) && (c-low<=high-low). This requires wrap-around arithmetic for the type of the expression. First make sure that arithmetic in this type is valid, then make sure that it wraps around.
Check if (unsigned) INT_MAX + 1 == (unsigned) INT_MIN for the type in question, as we rely on this here.
tree build_real_from_int_cst  (  tree  , 
const_tree  
) 
Same as build_pointer_type_for_mode, but for REFERENCE_TYPE.
If the pointed-to type has the may_alias attribute set, force a TYPE_REF_CAN_ALIAS_ALL pointer to be generated.
In some cases, languages will have things that aren't a REFERENCE_TYPE (such as a RECORD_TYPE for fat pointers in Ada) as TYPE_REFERENCE_TO. In that case, return that type without regard to the rest of our operands. ??? This is a kludge, but consistent with the way this function has always operated and there doesn't seem to be a good way to avoid this at the moment.
First, if we already have a type for pointers to TO_TYPE and it's the proper mode, use it.
tree build_simple_mem_ref_loc  (  location_t  , 
tree  
) 
tree build_string  (  int  , 
const char *  
) 
tree build_string_literal  (  int  , 
const char *  
) 
tree build_target_option_node  (  struct gcc_options *  opts  ) 
Return a tree node that encapsulates the target options in OPTS.
tree build_tm_abort_call  (  location_t  , 
bool  
) 
In trans-mem.c.
Return a type like TTYPE except that its TYPE_ATTRIBUTES is ATTRIBUTE. Such modified types already made are recorded so that duplicates are not made.
tree build_vector_from_ctor  (  tree  , 
vec< constructor_elt, va_gc > *  
) 
tree build_vl_exp_stat  (  enum  tree_code, 
int  MEM_STAT_DECL  
) 

inline static 
Return the tree node for an explicit standard builtin function or NULL.
Referenced by assign_parms(), emit_call_1(), expand_omp_atomic_load(), expand_omp_for(), expand_omp_sections(), find_bswap(), fold_builtin_fputs(), get_tm_region_blocks(), gimplify_return_expr(), ipa_tm_diagnose_tm_safe(), lower_emutls_phi_arg(), lower_omp_sections(), maybe_add_implicit_barrier_cancel(), maybe_catch_exception(), maybe_emit_sprintf_chk_warning(), rewrite_call_expr_valist(), and tree_int_cst_sign_bit().

inline static 
Return whether the standard builtin function can be used as an explicit function.
Referenced by tree_class_check_failed(), and tree_range_check_failed().

inline static 
Return the tree node for an implicit builtin function or NULL.
Referenced by avoid_folding_inline_builtin(), fold_builtin_4(), fold_builtin_classify(), fold_builtin_logarithm(), instrument_builtin_call(), instrument_memory_accesses(), lower_gimple_return(), maybe_emit_sprintf_chk_warning(), rewrite_call_expr_valist(), set_strinfo(), split_critical_edges(), and tsan_pass().

inline static 
Return whether the standard builtin function can be used implicitly.
enum built_in_function builtin_mathfn_code  (  const_tree  ) 
rtx builtin_memset_read_str  (  void *  data, 
HOST_WIDE_INT  offset,  
enum machine_mode  mode  
) 
Callback routine for store_by_pieces. Read GET_MODE_BITSIZE (MODE) bytes from constant string DATA + OFFSET and return it as target constant.
tree byte_position  (  const_tree  ) 
void cache_integer_cst  (  tree  ) 
int call_expr_flags  (  const_tree  ) 
int can_move_by_pieces  (  unsigned HOST_WIDE_INT  len, 
unsigned int  align  
) 
In expr.c.
Determine whether the LEN bytes can be moved by using several move instructions. Return nonzero if a call to move_by_pieces should succeed.
References move_by_pieces_d::autinc_from, move_by_pieces_d::reverse, and widest_int_mode_for_size().
Referenced by gimplify_init_ctor_eval().
bool categorize_ctor_elements  (  const_tree  ctor, 
HOST_WIDE_INT *  p_nz_elts,  
HOST_WIDE_INT *  p_init_elts,  
bool *  p_complete  
) 
Examine CTOR to discover: how many scalar fields are set to nonzero values, and place it in *P_NZ_ELTS; how many scalar fields in total are in CTOR, and place it in *P_INIT_ELTS; and whether the constructor is complete -- in the sense that every meaningful byte is explicitly given a value -- and place it in *P_COMPLETE. Return whether or not CTOR is a valid static constant initializer, the same as "initializer_constant_valid_p (CTOR, TREE_TYPE (CTOR)) != 0".
int chain_member  (  const_tree  , 
const_tree  
) 
Concatenate two lists (chains of TREE_LIST nodes) X and Y by making the last node in X point to Y. Returns X, except if X is 0 returns Y.
bool check_qualified_type  (  const_tree  , 
const_tree  ,  
int  
) 
Check whether CAND is suitable to be returned from get_qualified_type (BASE, TYPE_QUALS).
void clean_symbol_name  (  char *  ) 
tree combine_comparisons  (  location_t  loc, 
enum tree_code  code,  
enum tree_code  lcode,  
enum tree_code  rcode,  
tree  truth_type,  
tree  ll_arg,  
tree  lr_arg  
) 
Return a tree for the comparison which is the combination of doing the AND or OR (depending on CODE) of the two operations LCODE and RCODE on the identical operands LL_ARG and LR_ARG. Take into account the possibility of trapping if the mode has NaNs, and return NULL_TREE if this makes the transformation invalid.
Eliminate unordered comparisons, as well as LTGT and ORD which are not used unless the mode has NaNs.
Check that the original operation and the optimized ones will trap under the same condition.
In a short-circuited boolean expression the LHS might be such that the RHS, if evaluated, will never trap. For example, in ORD (x, y) && (x < y), we evaluate the RHS only if neither x nor y is NaN. (This is a mixed blessing: for example, the expression above will never trap, hence optimizing it to x < y would be invalid).
If the comparison was short-circuited, and only the RHS trapped, we may now generate a spurious trap.
If we changed the conditions that cause a trap, we lose.
Referenced by fold_range_test().
bool commutative_ternary_tree_code  (  enum  tree_code  ) 
bool commutative_tree_code  (  enum  tree_code  ) 
int comp_type_attributes  (  const_tree  , 
const_tree  
) 
Return 0 if the attributes for two types are incompatible, 1 if they are compatible, and 2 if they are nearly compatible (which causes a warning to be generated).
int compare_tree_int  (  const_tree  , 
unsigned  HOST_WIDE_INT  
) 
bool complete_ctor_at_level_p  (  const_tree  type, 
HOST_WIDE_INT  num_elts,  
const_tree  last_type  
) 
TYPE is initialized by a constructor with NUM_ELTS elements, the last of which had type LAST_TYPE. Each element was itself a complete initializer, in the sense that every meaningful byte was explicitly given a value. Return true if the same is true for the constructor as a whole.
??? We could look at each element of the union, and find the largest element. Which would avoid comparing the size of the initialized element against any tail padding in the union. Doesn't seem worth the effort...
Return a tree representing the offset, in bytes, of the field referenced by EXP. This does not include any offset in DECL_FIELD_BIT_OFFSET.
unsigned HOST_WIDE_INT compute_builtin_object_size  (  tree  , 
int  
) 
void compute_record_mode  (  tree  ) 
bool constructor_static_from_elts_p  (  const_tree  ) 
Whether a constructor CTOR is a valid static constant initializer if all its elements are. This used to be internal to initializer_constant_valid_p and has been exposed to let other functions like categorize_ctor_elements evaluate the property while walking a constructor for other purposes.
bool contains_bitfld_component_ref_p  (  const_tree  ) 
bool contains_placeholder_p  (  const_tree  ) 
Return true if EXP contains a PLACEHOLDER_EXPR, i.e. if it represents a size or offset that depends on a field within a record.


inline static 
Return OFF converted to a pointer offset type suitable as offset for POINTER_PLUS_EXPR. Use location LOC for this conversion.
unsigned crc32_byte  (  unsigned  , 
char  
) 
unsigned crc32_string  (  unsigned  , 
const char *  
) 
In tree.c
unsigned crc32_unsigned  (  unsigned  , 
unsigned  
) 
tree create_artificial_label  (  location_t  ) 
bool cst_and_fits_in_hwi  (  const_tree  ) 
Given a CONSTRUCTOR CTOR, return the element values as a vector.
bool cxx11_attribute_p  (  const_tree  ) 
void debug  (  const tree_node &  ref  ) 
void debug  (  const tree_node *  ptr  ) 
void debug_body  (  const tree_node &  ref  ) 
void debug_body  (  const tree_node *  ptr  ) 
void debug_fold_checksum  (  const_tree  ) 
void debug_head  (  const tree_node &  ref  ) 
void debug_head  (  const tree_node *  ptr  ) 
void debug_raw  (  const tree_node &  ref  ) 
void debug_raw  (  const tree_node *  ptr  ) 
void debug_tree  (  tree  ) 
In print-tree.c
void debug_verbose  (  const tree_node &  ref  ) 
void debug_verbose  (  const tree_node *  ptr  ) 
bool decl_address_invariant_p  (  const_tree  ) 
bool decl_address_ip_invariant_p  (  const_tree  ) 
bool decl_assembler_name_equal  (  tree  decl, 
const_tree  asmname  
) 
hashval_t decl_assembler_name_hash  (  const_tree  asmname  ) 
Process the attributes listed in ATTRIBUTES and install them in *NODE, which is either a DECL (including a TYPE_DECL) or a TYPE. If a DECL, it should be modified in place; if a TYPE, a copy should be created unless ATTR_FLAG_TYPE_IN_PLACE is set in FLAGS. FLAGS gives further information, in the form of a bitwise OR of flags in enum attribute_flags from tree.h. Depending on these flags, some attributes may be returned to be applied at a later stage (for example, to apply a decl attribute to the declaration rather than to its type).
bool decl_binds_to_current_def_p  (  tree  ) 
Referenced by align_variable(), and default_elf_select_section().
enum tls_model decl_default_tls_model  (  const_tree  ) 
void decl_fini_priority_insert  (  tree  , 
priority_type  
) 
priority_type decl_fini_priority_lookup  (  tree  ) 
tree decl_function_context  (  const_tree  ) 
Return the FUNCTION_DECL which provides this _DECL with its context, or zero if none.
void decl_init_priority_insert  (  tree  , 
priority_type  
) 
priority_type decl_init_priority_lookup  (  tree  ) 
bool decl_replaceable_p  (  tree  ) 
tree decl_type_context  (  const_tree  ) 
Return the RECORD_TYPE, UNION_TYPE, or QUAL_UNION_TYPE which provides this _DECL with its context, or zero if none.
void declare_weak  (  tree  ) 
Declare DECL to be a weak symbol.
tree div_if_zero_remainder  (  enum  tree_code, 
const_tree  ,  
const_tree  
) 
bool double_int_fits_to_tree_p  (  const_tree  , 
double_int  
) 
tree double_int_to_tree  (  tree  , 
double_int  
) 
void dump_addr  (  FILE *  , 
const char *  ,  
const void *  
) 
void dump_tree_statistics  (  void  ) 
Print debugging information about tree nodes generated during the compile, and any language-specific information.
References targetm.
unsigned int element_precision  (  const_tree  ) 
void expand_asm_stmt  (  gimple  ) 
void expand_computed_goto  (  tree  ) 
In stmt.c
void expand_dummy_function_end  (  void  ) 
Undo the effects of init_dummy_function_start.
End any sequences that failed to be closed due to syntax errors.
Outside function body, can't compute type's actual size until next function's body starts.
void expand_function_end  (  void  ) 
Generate RTL for the end of the current function.
If arg_pointer_save_area was referenced only from a nested function, we will not have initialized it yet. Do that now.
If we are doing generic stack checking and this function makes calls, do a stack probe at the start of the function to ensure we have enough space for another stack frame.
End any sequences that failed to be closed due to syntax errors.
Output a line number for the end of the function. SDB depends on this.
Before the return label (if any), clobber the return registers so that they are not propagated live to the rest of the function. This can only happen with functions that drop through; if there had been a return statement, there would have either been a return rtx, or a jump to the return label. We delay actual code generation after the current_function_value_rtx is computed.
Output the label for the actual return from the function.
Let except.c know where it should emit the call to unregister the function context for sjlj exceptions.
We want to ensure that instructions that may trap are not moved into the epilogue by scheduling, because we don't always emit unwind information for the epilogue.
If this is an implementation of throw, do what's necessary to communicate between __builtin_eh_return and the epilogue.
If scalar return value was computed in a pseudo-reg, or was a named return value that got dumped to the stack, copy that to the hard return register.
This should be set in assign_parms.
If this is a BLKmode structure being returned in registers, then use the mode computed in expand_return. Note that if decl_rtl is memory, then its mode may have been changed, but that crtl->return_rtx has not.
If a non-BLKmode return value should be padded at the least significant end of the register, shift it left by the appropriate amount. BLKmode results are handled using the group load/store machinery.
If a named return value dumped decl_return to memory, then we may need to redo the PROMOTE_MODE signed/unsigned extension.
If expand_function_start has created a PARALLEL for decl_rtl, move the result to the real return registers. Otherwise, do a group load from decl_rtl for a named return.
In the case of complex integer modes smaller than a word, we'll need to generate some non-trivial bitfield insertions. Do that on a pseudo and not the hard register.
If returning a structure, arrange to return the address of the value in a place where debuggers expect to find it. If returning a structure PCC style, the caller also depends on this value. And cfun->returns_pcc_struct is not necessarily set.
Mark this as a function return value so integrate will delete the assignment and USE below when inlining this function.
The address may be ptr_mode and OUTGOING may be Pmode.
Show return register used to hold result (in this case the address of the result).
Emit the actual code to clobber return register.
Output the label for the naked return from the function.
@@@ This is a kludge. We want to ensure that instructions that may trap are not moved into the epilogue by scheduling, because we don't always emit unwind information for the epilogue.
If stack protection is enabled for this function, check the guard.
If we had calls to alloca, and this machine needs an accurate stack pointer to exit the function, insert some code to save and restore the stack pointer.
??? This should no longer be necessary since stupid is no longer with us, but there are some parts of the compiler (e.g. reload_combine, and sh mach_dep_reorg) that still try and compute their own lifetime info instead of using the general framework.
void expand_function_start  (  tree  ) 
void expand_goto  (  tree  ) 
void expand_label  (  tree  ) 
In stmt.c
void expand_main_function  (  void  ) 
In function.c
void expand_return  (  tree  ) 
void expand_stack_restore  (  tree  ) 
rtx expand_stack_save  (  void  ) 
Emit code to save the current value of stack.
unsigned int expr_align  (  const_tree  ) 

inline 
These checks have to be special cased.
int fields_length  (  const_tree  ) 
Returns the number of FIELD_DECLs in a type.
void finalize_size_functions  (  void  ) 
Take, queue and compile all the size functions. It is essential that the size functions be gimplified at the very end of the compilation in order to guarantee transparent handling of self-referential sizes. Otherwise the GENERIC inliner would not be able to inline them back at each of their call sites, thus creating artificial non-constant size expressions which would trigger nasty problems later on.
Given a tree EXP, find all occurrences of references to fields in a PLACEHOLDER_EXPR and place them in vector REFS without duplicates. Also record VAR_DECLs and CONST_DECLs. Note that we assume here that EXP contains only arithmetic expressions or CALL_EXPRs with PLACEHOLDER_EXPRs occurring only in their argument list.
void fini_object_sizes  (  void  ) 
Destroy data structures after the object size computation.
Finish up a builtin RECORD_TYPE. Give it a name and provide its fields. Optionally specify an alignment, and then lay it out.
Finish processing a builtin RECORD_TYPE type TYPE. Its name is NAME, its fields are chained in reverse on FIELDS. If ALIGN_TYPE is non-null, it is given the same alignment as ALIGN_TYPE.
References build_int_cst(), double_int_to_tree(), integer_zerop(), tree_int_cst_lt(), and tree_to_double_int().
void finish_record_layout  (  record_layout_info  , 
int  
) 

inline static 
Initialize the abstract argument list iterator object ITER, then advance past and return the first argument. Useful in for expressions, e.g. for (arg = first_call_expr_arg (exp, &iter); arg; arg = next_call_expr_arg (&iter))
Referenced by delete_unreachable_blocks_update_callgraph().

inline static 
tree first_field  (  const_tree  ) 
Returns the first FIELD_DECL in a type.
int fixed_zerop  (  const_tree  ) 
fixed_zerop (tree x) is nonzero if X is a fixed-point constant of value 0.
void fixup_signed_type  (  tree  ) 
void fixup_unsigned_type  (  tree  ) 
int flags_from_decl_or_type  (  const_tree  ) 
In calls.c
Fold constants as much as possible in an expression. Returns the simplified expression. Acts only on the top level of the expression; if the argument itself cannot be simplified, its subexpressions are not changed.
Referenced by convert_to_real(), expand_expr_real_1(), and process_assert_insertions_for().
Fold a binary expression of code CODE and type TYPE with operands OP0 and OP1. LOC is the location of the resulting expression. Return the folded expression if folding is successful. Otherwise, return NULL_TREE.
Strip any conversions that don't change the mode. This is safe for every expression, except for a comparison expression because its signedness is derived from its operands. So, in the latter case, only strip conversions that don't change the signedness. MIN_EXPR/MAX_EXPR also need signedness of arguments preserved. Note that this is done as an internal manipulation within the constant folder, in order to find the simplest representation of the arguments so that their form can be studied. In any case, the appropriate type conversions should be put back in the tree that will get out of the constant folder.
Note that TREE_CONSTANT isn't enough: static var addresses are constant but we can't do arithmetic on them.
Make sure type and arg0 have the same saturating flag.
If this is a commutative operation, and ARG0 is a constant, move it to ARG1 to reduce the number of tests below.
ARG0 is the first operand of EXPR, and ARG1 is the second operand. First check for cases where an arithmetic operation is applied to a compound, conditional, or comparison operation. Push the arithmetic operation inside the compound or conditional to see if any folding can then be done. Convert comparison to conditional for this purpose. This also optimizes non-constant cases that used to be done in expand_expr. Before we do that, see if this is a BIT_AND_EXPR or a BIT_IOR_EXPR, one of the operands is a comparison and the other is a comparison, a BIT_AND_EXPR with the constant 1, or a truth value. In that case, the code below would make the expression more complex. Change it to a TRUTH_{AND,OR}_EXPR. Likewise, convert a similar NE_EXPR to TRUTH_XOR_EXPR and an EQ_EXPR to the inversion of a TRUTH_XOR_EXPR.
MEM[&MEM[p, CST1], CST2] -> MEM[p, CST1 + CST2].
MEM[&a.b, CST2] -> MEM[&a, offsetof (a, b) + CST2].
0 +p index -> (type)index
PTR +p 0 -> PTR
INT +p INT -> (PTR)(INT + INT). Stripping types allows for this.
(PTR +p B) +p A -> PTR +p (B + A)
PTR_CST +p CST -> CST1
Try replacing &a[i1] +p c * i2 with &a[i1 + i2], if c is step of the array. Loop optimizer sometimes produce this type of expressions.
A + (-B) -> A - B
(-A) + B -> B - A
Convert ~A + 1 to -A.
~X + X is -1.
X + ~X is -1.
X + (X / CST) * -CST is X % CST.
Handle (A1 * C1) + (A2 * C2) with A1, A2 or C1, C2 being the same or one. Make sure the type is not saturating and has the signedness of the stripped operands, as fold_plusminus_mult_expr will reassociate. ??? The latter condition should use TYPE_OVERFLOW_* flags instead.
If we are adding two BIT_AND_EXPR's, both of which are and'ing with a constant, and the two constants have no bits in common, we should treat this as a BIT_IOR_EXPR since this may produce more simplifications.
Reassociate (plus (plus (mult) (foo)) (mult)) as (plus (plus (mult) (mult)) (foo)) so that we can take advantage of the factoring cases below.
See if ARG1 is zero and X + ARG1 reduces to X.
Likewise if the operands are reversed.
Convert X + -C into X - C.
Fold __complex__ ( x, 0 ) + __complex__ ( 0, y ) to __complex__ ( x, y ). This is not the same for SNaNs or if signed zeros are involved.
Convert x+x into x*2.0.
Convert a + (b*c + d*e) into (a + b*c) + d*e. We associate floats only if the user has specified -fassociative-math.
Convert (b*c + d*e) + a into b*c + (d*e + a). We associate floats only if the user has specified -fassociative-math.
(A << C1) + (A >> C2) if A is unsigned and C1+C2 is the size of A is a rotate of A by C1 bits.
(A << B) + (A >> (Z  B)) if A is unsigned and Z is the size of A is a rotate of A by B bits.
Only create rotates in complete modes. Other cases are not expanded properly.
In most languages, can't associate operations on floats through parentheses. Rather than remember where the parentheses were, we don't associate floats at all, unless the user has specified -fassociative-math. And, we need to make sure type is not saturating.
Split both trees into variables, constants, and literals. Then associate each group together, the constants with literals, then the result with variables. This increases the chances of literals being recombined later and of generating relocatable expressions for the sum of a constant and literal.
Recombine MINUS_EXPR operands by using PLUS_EXPR.
With undefined overflow prefer doing association in a type which wraps on overflow, if that is one of the operand types.
With undefined overflow we can only associate constants with one variable, and constants whose association doesn't overflow.
The only case we can still associate with two variables is if they are the same, modulo negation and bit-pattern preserving conversions.
Only do something if we found more than two objects. Otherwise, nothing has changed and we risk infinite recursion.
Preserve the MINUS_EXPR if the negative part of the literal is greater than the positive part. Otherwise, the multiplicative folding code (i.e. extract_muldiv) may be fooled in case unsigned constants are subtracted, like in the following example: ((X*2 + 4) - 8U)/2.
Don't introduce overflows through reassociation.
Pointer simplifications for subtraction, simple reassociations.
(PTR0 p+ A) - (PTR1 p+ B) -> (PTR0 - PTR1) + (A - B)
(PTR0 p+ A) - PTR1 -> (PTR0 - PTR1) + A, assuming PTR0 - PTR1 simplifies.
A - (-B) -> A + B
(-A) - B -> (-B) - A where B is easily negated and we can swap.
Convert -A - 1 to ~A.
Convert -1 - A to ~A.
X - (X / Y) * Y is X % Y.
Fold A - (A & B) into ~B & A.
Fold (A & ~B) - (A & B) into (A ^ B) - B, where B is any power of 2 minus 1.
See if ARG1 is zero and X - ARG1 reduces to X.
(ARG0 - ARG1) is the same as (-ARG1 + ARG0). So check whether ARG0 is zero and X + ARG0 reduces to X, since that would mean (-ARG1 + ARG0) reduces to -ARG1.
Fold __complex__ ( x, 0 ) - __complex__ ( 0, y ) to __complex__ ( x, -y ). This is not the same for SNaNs or if signed zeros are involved.
Fold &x - &x. This can happen from &x.foo - &x. This is unsafe for certain floats even in non-IEEE formats. In IEEE, it is unsafe because it does wrong for NaNs. Also note that operand_equal_p is always false if an operand is volatile.
A - B -> A + (-B) if B is easily negatable.
Avoid this transformation if B is a positive REAL_CST.
Try folding difference of addresses.
Fold &a[i] - &a[j] to i-j.
Handle (A1 * C1) - (A2 * C2) with A1, A2 or C1, C2 being the same or one. Make sure the type is not saturating and has the signedness of the stripped operands, as fold_plusminus_mult_expr will reassociate. ??? The latter condition should use TYPE_OVERFLOW_* flags instead.
(-A) * (-B) -> A * B
Transform x * -1 into -x. Make sure to do the negation on the original operand with conversions not stripped because we can only strip non-sign-changing conversions.
Transform x * -C into -x * C if x is easily negatable.
(a * (1 << b)) is (a << b)
(A + A) * C > A * 2 * C
((T) (X /[ex] C)) * C cancels out if the conversion is sign-changing only.
Optimize z * conj(z) for integer complex numbers.
Maybe fold x * 0 to 0. The expressions aren't the same when x is NaN, since x * 0 is also NaN. Nor are they the same in modes with signed zeros, since multiplying a negative value by 0 gives -0, not +0.
In IEEE floating point, x*1 is not equivalent to x for snans. Likewise for complex arithmetic with signed zeros.
Transform x * 1.0 into x.
Convert (C1/X)*C2 into (C1*C2)/X. This transformation may change the result for floating point types due to rounding so it is applied only if -fassociative-math was specified.
Strip sign operations from X in X*X, i.e. -Y*-Y -> Y*Y.
Fold z * +-I to __complex__ (-+__imag z, +-__real z). This is not the same for NaNs or if signed zeros are involved.
Optimize z * conj(z) for floating point complex numbers. Guarded by flag_unsafe_math_optimizations as non-finite imaginary components don't produce scalar results.
Optimizations of root(...)*root(...).
Optimize sqrt(x)*sqrt(x) as x.
Optimize root(x)*root(y) as root(x*y).
Optimize expN(x)*expN(y) as expN(x+y).
Optimizations of pow(...)*pow(...).
Optimize pow(x,y)*pow(z,y) as pow(x*z,y).
Optimize pow(x,y)*pow(x,z) as pow(x,y+z).
Optimize tan(x)*cos(x) as sin(x).
Optimize x*pow(x,c) as pow(x,c+1).
Optimize pow(x,c)*x as pow(x,c+1).
Canonicalize x*x as pow(x,2.0), which is expanded as x*x.
~X | X is -1.
X | ~X is -1.
Canonicalize (X & C1) | C2.
If (C1&C2) == C1, then (X&C1)|C2 becomes (X,C2).
If (C1|C2) == ~0 then (X&C1)|C2 becomes X|C2.
Minimize the number of bits set in C1, i.e. C1 := C1 & ~C2, unless (C1 & ~C2) | (C2 & C3) for some C3 is a mask of some mode which allows further optimizations.
If X is a tree of the form (Y * K1) & K2, this might conflict with that optimization from the BIT_AND_EXPR optimizations. This could end up in an infinite recursion.
(X & Y) | Y is (X, Y).
(X & Y) | X is (Y, X).
X | (X & Y) is (Y, X).
X | (Y & X) is (Y, X).
(X & ~Y) | (~X & Y) is X ^ Y
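The BIT_IOR_EXPR identities in the entries above are easy to verify directly. A hedged sketch in plain C (not GCC code; helper names are invented, and the constants C1 = 0x0F, C2 = 0xFF are arbitrary picks satisfying (C1 & C2) == C1):

```c
#include <assert.h>

/* (X & ~Y) | (~X & Y) selects exactly the bits where X and Y
   differ, i.e. X ^ Y.  */
static unsigned xor_via_or (unsigned x, unsigned y)
{
  return (x & ~y) | (~x & y);
}

/* If (C1 & C2) == C1, every bit of X & C1 is already in C2, so
   (X & C1) | C2 collapses to just C2.  */
static unsigned or_absorb (unsigned x)
{
  return (x & 0x0Fu) | 0xFFu;
}
```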
Convert (or (not arg0) (not arg1)) to (not (and (arg0) (arg1))). This results in more efficient code for machines without a NAND instruction. Combine will canonicalize to the first form which will allow use of NAND instructions provided by the backend if they exist.
See if this can be simplified into a rotate first. If that is unsuccessful continue in the association code.
~X ^ X is -1.
X ^ ~X is -1.
If we are XORing two BIT_AND_EXPR's, both of which are and'ing with a constant, and the two constants have no bits in common, we should treat this as a BIT_IOR_EXPR since this may produce more simplifications.
(X | Y) ^ X -> Y & ~ X
(Y | X) ^ X -> Y & ~ X
X ^ (X | Y) -> Y & ~ X
X ^ (Y | X) -> Y & ~ X
Convert ~X ^ ~Y to X ^ Y.
Convert ~X ^ C to X ^ ~C.
Fold (X & 1) ^ 1 as (X & 1) == 0.
Fold (X & Y) ^ Y as ~X & Y.
Fold (X & Y) ^ X as ~Y & X.
Fold X ^ (X & Y) as X & ~Y.
Fold X ^ (Y & X) as ~Y & X.
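The XOR-with-AND folds above can be checked with a couple of one-liners. Illustrative only, not GCC code; the helper names are made up:

```c
#include <assert.h>

/* Where Y has a 1 bit, (X & Y) ^ Y flips the corresponding X bit;
   where Y has a 0 bit the result bit is 0.  Hence ~X & Y.  */
static unsigned fold_and_xor (unsigned x, unsigned y)
{
  return (x & y) ^ y;
}

/* X ^ (X & Y) clears from X exactly the bits it shares with Y,
   i.e. X & ~Y.  */
static unsigned fold_xor_and (unsigned x, unsigned y)
{
  return x ^ (x & y);
}
```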
See if this can be simplified into a rotate first. If that is unsuccessful continue in the association code.
~X & X, (X == 0) & X, and !X & X are always zero.
X & ~X , X & (X == 0), and X & !X are always zero.
Canonicalize (X | C1) & C2 as (X & C2) | (C1 & C2).
(X | Y) & Y is (X, Y).
(X | Y) & X is (Y, X).
X & (X | Y) is (Y, X).
X & (Y | X) is (Y, X).
Fold (X ^ 1) & 1 as (X & 1) == 0.
Fold ~X & 1 as (X & 1) == 0.
Fold !X & 1 as X == 0.
Fold (X ^ Y) & Y as ~X & Y.
Fold (X ^ Y) & X as ~Y & X.
Fold X & (X ^ Y) as X & ~Y.
Fold X & (Y ^ X) as ~Y & X.
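The AND canonicalizations above can likewise be spot-checked in plain C. A sketch with invented helper names and arbitrary constants C1 = 0x0F, C2 = 0xF3 (not GCC code):

```c
#include <assert.h>

/* (X | C1) & C2 distributes to (X & C2) | (C1 & C2).  */
static unsigned canon_or_and (unsigned x)
{
  return (x | 0x0Fu) & 0xF3u;
}

/* (X ^ 1) & 1 tests the low bit for zero, i.e. it equals
   (X & 1) == 0.  */
static unsigned low_bit_clear (unsigned x)
{
  return (x ^ 1u) & 1u;
}
```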
Fold (X * Y) & -(1 << CST) to X * Y if Y is a constant multiple of 1 << CST.
Fold (X * CST1) & CST2 to zero if we can, or drop known zero bits from CST2.
For constants M and N, if M == (1LL << cst) - 1 && (N & M) == M, ((A & N) + B) & M -> (A + B) & M. Similarly if (N & M) == 0, ((A | N) + B) & M -> (A + B) & M, and for - instead of + (or unary - instead of +) and/or ^ instead of |. If B is constant and (B & M) == 0, fold into A & M.
Now we know that arg0 is (C + D) or (C - D) or -C and arg1 (M) is == (1LL << cst) - 1. Store C into PMOP[0] and D into PMOP[1].
tree_low_cst not used, because we don't care about the upper bits.
If C or D is of the form (A & N) where (N & M) == M, or of the form (A | N) or (A ^ N) where (N & M) == 0, replace it with A.
If C or D is a N where (N & M) == 0, it can be omitted (assumed 0).
Only build anything new if we optimized one or both arguments above.
Perform the operations in a type that has defined overflow behavior.
TEM is now the new binary +, - or unary - replacement.
Simplify ((int)c & 0377) into (int)c, if c is unsigned char.
Convert (and (not arg0) (not arg1)) to (not (or (arg0) (arg1))). This results in more efficient code for machines without a NOR instruction. Combine will canonicalize to the first form which will allow use of NOR instructions provided by the backend if they exist.
If arg0 is derived from the address of an object or function, we may be able to fold this expression using the object or function's alignment.
This works because modulus is a power of 2. If this weren't the case, we'd have to replace it by its greatest power-of-2 divisor: modulus & -modulus.
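The "modulus & -modulus" trick mentioned above isolates the lowest set bit, which for a nonzero value is also its greatest power-of-2 divisor. A minimal demonstration (not GCC code; the helper name is invented):

```c
#include <assert.h>

/* For nonzero unsigned M, M & -M keeps only the lowest set bit.
   Unsigned negation is well defined (modulo 2^N), so this is
   portable C.  */
static unsigned greatest_pow2_divisor (unsigned m)
{
  return m & -m;
}
```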
Fold (X << C1) & C2 into (X << C1) & (C2 | ((1 << C1) - 1)), and (X >> C1) & C2 into (X >> C1) & (C2 | ~((type) -1 >> C1)), if the new mask might be further optimized.
See if more bits can be proven as zero because of zero extension.
See if we can shorten the right shift.
For an arithmetic shift, if the sign bit could be set, zerobits can actually contain sign bits, so no transformation is possible unless MASK masks them all away. In that case the shift needs to be converted into a logical shift.
((X << 16) & 0xff00) is (X, 0).
Only do the transformation if NEWMASK is some integer mode's mask.
Don't touch a floating-point divide by zero unless the mode of the constant can represent infinity.
Optimize A / A to 1.0 if we don't care about NaNs or Infinities. Skip the transformation for nonreal operands.
The complex version of the above A / A optimization.
omit_two_operands will call fold_convert for us.
(-A) / (-B) -> A / B
In IEEE floating point, x/1 is not equivalent to x for snans.
In IEEE floating point, x/-1 is not equivalent to -x for snans.
If ARG1 is a constant, we can convert this to a multiply by the reciprocal. This does not have the same rounding properties, so only do this if -freciprocal-math. We can actually always safely do it if ARG1 is a power of two, but it's hard to tell if it is or not in a portable manner.
Find the reciprocal if optimizing and the result is exact. TODO: Complex reciprocal not implemented.
Convert A/B/C to A/(B*C).
Convert A/(B/C) to (A/B)*C.
Convert C1/(X*C2) into (C1/C2)/X.
Optimize sin(x)/cos(x) as tan(x).
Optimize cos(x)/sin(x) as 1.0/tan(x).
Optimize sin(x)/tan(x) as cos(x) if we don't care about NaNs or Infinities.
Optimize tan(x)/sin(x) as 1.0/cos(x) if we don't care about NaNs or Infinities.
Optimize pow(x,c)/x as pow(x,c-1).
Optimize a/root(b/c) into a*root(c/b).
Optimize x/expN(y) into x*expN(-y).
Optimize x/pow(y,z) into x*pow(y,-z).
Optimize (X & (-A)) / A where A is a power of 2, to X >> log2(A)
Fall through
Simplify A / (B << N) where A and B are positive and B is a power of 2, to A >> (N + log2(B)).
For unsigned integral types, FLOOR_DIV_EXPR is the same as TRUNC_DIV_EXPR. Rewrite into the latter in this case.
Fall through
X / -1 is -X.
Convert -A / -B to A / B when the type is signed and overflow is undefined.
If arg0 is a multiple of arg1, then rewrite to the fastest div operation, EXACT_DIV_EXPR. Note that only CEIL_DIV_EXPR and FLOOR_DIV_EXPR are rewritten now. At one time others generated faster code, it's not clear if they do after the last round of changes to the DIV code in expmed.c.
X % 1 is always zero, but be sure to preserve any side effects in X.
X % 0, return X % 0 unchanged so that we can get the proper warnings and errors.
0 % X is always zero, but be sure to preserve any side effects in X. Place this after checking for X == 0.
X % -1 is zero.
X % -C is the same as X % C.
Avoid this transformation if C is INT_MIN, i.e. C == -C.
X % -Y is the same as X % Y.
Optimize TRUNC_MOD_EXPR by a power of two into a BIT_AND_EXPR, i.e. "X % C" into "X & (C - 1)", if X and C are positive.
Also optimize A % (C << N) where C is a power of 2, to A & ((C << N) - 1).
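The power-of-two remainder fold above is straightforward to verify. A sketch in plain C (illustrative only, not GCC code; the helper name is invented):

```c
#include <assert.h>

/* For unsigned X and a power-of-2 constant C, X % C equals
   X & (C - 1), since C - 1 is a mask of the low log2(C) bits.  */
static unsigned mod_pow2 (unsigned x, unsigned c)
{
  return x & (c - 1);
}
```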
Optimize -1 >> x for arithmetic right shifts.
... fall through ...
Prefer vector1 << scalar to vector1 << vector2 if vector2 is uniform.
Since negative shift count is not welldefined, don't try to compute it in the compiler.
Turn (a OP c1) OP c2 into a OP (c1+c2).
Deal with a OP (c1 + c2) being undefined but (a OP c1) OP c2 being well defined.
Transform (x >> c) << c into x & (-1 << c), or transform (x << c) >> c into x & ((unsigned) -1 >> c) for unsigned types.
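Both shift-pair masks above can be demonstrated on unsigned values, where the shifts are fully defined. A sketch (not GCC code; helper names invented, valid for 0 < c < bit width):

```c
#include <assert.h>

/* (x >> c) << c clears the low c bits, i.e. x & (~0u << c).  */
static unsigned clear_low (unsigned x, int c)
{
  return (x >> c) << c;
}

/* (x << c) >> c clears the high c bits, i.e. x & (~0u >> c).  */
static unsigned clear_high (unsigned x, int c)
{
  return (x << c) >> c;
}
```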
Rewrite an LROTATE_EXPR by a constant into an RROTATE_EXPR by a new constant.
If we have a rotate of a bit operation with the rotate count and the second operand of the bit operation both constant, permute the two operations.
Two consecutive rotates adding up to the precision of the type can be ignored.
Fold (X & C2) << C1 into (X << C1) & (C2 << C1) (X & C2) >> C1 into (X >> C1) & (C2 >> C1) if the latter can be further optimized.
Note that the operands of this must be ints and their values must be 0 or 1. ("true" is a fixed value perhaps depending on the language.)
If first arg is constant zero, return it.
If either arg is constant true, drop it.
Preserve sequence points.
If second arg is constant zero, result is zero, but first arg must be evaluated.
Likewise for first arg, but note that only the TRUTH_AND_EXPR case will be handled here.
!X && X is always false.
X && !X is always false.
A < X && A + 1 > Y ==> A < X && A >= Y. Normally A + 1 > Y means A >= Y && A != MAX, but in this case we know that A < X <= MAX.
Note that the operands of this must be ints and their values must be 0 or true. ("true" is a fixed value perhaps depending on the language.)
If first arg is constant true, return it.
If either arg is constant zero, drop it.
Preserve sequence points.
If second arg is constant true, result is true, but we must evaluate first arg.
Likewise for first arg, but note this only occurs here for TRUTH_OR_EXPR.
!X || X is always true.
X || !X is always true.
(X && !Y) || (!X && Y) is X ^ Y
If the second arg is constant zero, drop it.
If the second arg is constant true, this is a logical inversion.
Identical arguments cancel to zero.
!X ^ X is always true.
X ^ !X is always true.
bool_var != 0 becomes bool_var.
bool_var == 1 becomes bool_var.
bool_var != 1 becomes !bool_var.
bool_var == 0 becomes !bool_var.
!exp != 0 becomes !exp
If this is an equality comparison of the address of two nonweak, unaliased symbols neither of which are extern (since we do not have access to attributes for externs), then we know the result.
We know that we're looking at the address of two nonweak, unaliased, static _DECL nodes. It is both wasteful and incorrect to call operand_equal_p to compare the two ADDR_EXPR nodes. It is wasteful in that all we need to do is test pointer equality for the arguments to the two ADDR_EXPR nodes. It is incorrect to use operand_equal_p as that function is NOT equivalent to a C equality test. It can in fact return false for two objects which would test as equal using the C equality operator.
If this is an EQ or NE comparison of a constant with a PLUS_EXPR or a MINUS_EXPR of a constant, we can convert it into a comparison with a revised constant as long as no overflow occurs.
Similarly for a NEGATE_EXPR.
Similarly for a BIT_XOR_EXPR; X ^ C1 == C2 is X == (C1 ^ C2).
Transform comparisons of the form X + Y CMP X to Y CMP 0.
Transform comparisons of the form C - X CMP X if C % 2 == 1.
If we have X - Y == 0, we can convert that to X == Y and similarly for !=. Don't do this for ordered comparisons due to overflow.
Convert ABS_EXPR<x> == 0 or ABS_EXPR<x> != 0 to x == 0 or x != 0.
If this is an EQ or NE comparison with zero and ARG0 is (1 << foo) & bar, convert it to (bar >> foo) & 1. Both require two operations, but the latter can be done in one less insn on machines that have only twooperand insns or on which a constant cannot be the first operand.
If this is an NE or EQ comparison of zero against the result of a signed MOD operation whose second operand is a power of 2, make the MOD operation unsigned since it is simpler and equivalent.
Fold ((X >> C1) & C2) == 0 and ((X >> C1) & C2) != 0 where C1 is a valid shift constant, and C2 is a power of two, i.e. a single bit.
Check for a valid shift count.
If (C2 << C1) doesn't overflow, then ((X >> C1) & C2) != 0 can be rewritten as (X & (C2 << C1)) != 0.
Otherwise, for signed (arithmetic) shifts, ((X >> C1) & C2) != 0 is rewritten as X < 0, and ((X >> C1) & C2) == 0 is rewritten as X >= 0.
Otherwise, for unsigned (logical) shifts, ((X >> C1) & C2) != 0 is rewritten as (X,false), and ((X >> C1) & C2) == 0 is rewritten as (X,true).
If we have (A & C) == C where C is a power of 2, convert this into (A & C) != 0. Similarly for NE_EXPR.
If we have (A & C) != 0 or (A & C) == 0 and C is the sign bit, then fold the expression into A < 0 or A >= 0.
If we have (A & C) == D where D & ~C != 0, convert this into 0. Similarly for NE_EXPR.
If we have (A  C) == D where C & ~D != 0, convert this into 0. Similarly for NE_EXPR.
If this is a comparison of a field, we may be able to simplify it.
Handle the constant case even without -O to make sure the warnings are given.
Optimize comparisons of strlen vs zero to a compare of the first character of the string vs zero. To wit, strlen(ptr) == 0 => *ptr == 0 strlen(ptr) != 0 => *ptr != 0 Other cases should reduce to one of these two (or a constant) due to the return value of strlen being unsigned.
Fold (X >> C) != 0 into X < 0 if C is one less than the width of X. Similarly fold (X >> C) == 0 into X >= 0.
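The sign-test fold above, (X >> C) != 0 with C one less than the width of X, can be sketched in C. To keep the shift well defined the demonstration shifts the unsigned representation, which carries the same sign bit (illustrative only; the helper name is invented):

```c
#include <assert.h>
#include <limits.h>

/* With C = width - 1, the shift moves the sign bit down to bit 0,
   so the != 0 test reduces to X < 0.  */
static int is_negative (int x)
{
  return ((unsigned) x >> (sizeof (int) * CHAR_BIT - 1)) != 0;
}
```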
(X ^ Y) == 0 becomes X == Y, and (X ^ Y) != 0 becomes X != Y.
(X ^ Y) == Y becomes X == 0. We know that Y has no sideeffects.
Likewise (X ^ Y) == X becomes Y == 0. X has no sideeffects.
(X ^ C1) op C2 can be rewritten as X op (C1 ^ C2).
Fold (~X & C) == 0 into (X & C) != 0 and (~X & C) != 0 into (X & C) == 0 when C is a single bit.
Fold ((X & C) ^ C) eq/ne 0 into (X & C) ne/eq 0, when the constant C is a power of two, i.e. a single bit.
Likewise, fold ((X ^ C) & C) eq/ne 0 into (X & C) ne/eq 0, when is C is a power of two, i.e. a single bit.
Fold -X op -Y as X op Y, where op is eq/ne.
Fold (X & C) op (Y & C) as "(X ^ Y) & C op 0", and symmetries.
Optimize (X ^ Z) op (Y ^ Z) as X op Y, and symmetries. operand_equal_p guarantees no sideeffects so we don't need to use omit_one_operand on Z.
Optimize (X ^ C1) op (Y ^ C2) as (X ^ (C1 ^ C2)) op Y.
Attempt to simplify equality/inequality comparisons of complex values. Only lower the comparison if the result is known or can be simplified to a single scalar comparison.
Transform comparisons of the form X +- C CMP X.
(X - c) > X becomes false.
Likewise (X + c) < X becomes false.
Convert (X - c) <= X to true.
Convert (X + c) >= X to true.
Convert X + c > X and X - c < X to true for integers.
Convert X + c <= X and X - c >= X to false for integers.
Comparisons with the highest or lowest possible integer of the specified precision will have known values.
The GE_EXPR and LT_EXPR cases above are not normally reached because of previous transformations.
We will flip the signedness of the comparison operator associated with the mode of arg1, so the sign bit is specified by this mode. Check that arg1 is the signed max associated with this sign bit.
signed_type does not work on pointer types.
The following case also applies to X < signed_max+1 and X >= signed_max+1 because of previous transformations.
If we are comparing an ABS_EXPR with a constant, we can convert all the cases into explicit comparisons, but they may well not be faster than doing the ABS and one comparison. But ABS (X) <= C is a range comparison, which becomes a subtraction and a comparison, and is probably faster.
Convert ABS_EXPR<x> >= 0 to true.
Convert ABS_EXPR<x> < 0 to false.
If X is unsigned, convert X < (1 << Y) into X >> Y == 0 and similarly for >= into !=.
Similarly for X < (cast) (1 << Y). But cast can't be narrowing, otherwise Y might be >= # of bits in X's type and thus e.g. (unsigned char) (1 << Y) for Y 15 might be 0. If the cast is widening, then 1 << Y should have unsigned type, otherwise if Y is number of bits in the signed shift type minus 1, we can't optimize this. E.g. (unsigned long long) (1 << Y) for Y 31 might be 0xffffffff80000000.
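The unsigned range-check rewrite above, X < (1 << Y) into X >> Y == 0, holds because X < 2^Y exactly when X has no bits at or above position Y. A minimal check (not GCC code; the helper name is invented, valid for y less than the bit width):

```c
#include <assert.h>

/* For unsigned X, (X >> Y) == 0 iff X < (1u << Y).  */
static int below_pow2 (unsigned x, unsigned y)
{
  return (x >> y) == 0;
}
```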
If the first operand is NaN, the result is constant.
If the second operand is NaN, the result is constant.
Simplify unordered comparison of something with itself.
Fold (double)float1 CMP (double)float2 into float1 CMP float2.
When pedantic, a compound expression can be neither an lvalue nor an integer constant expression.
Don't let (0, 0) be null pointer constant.
An ASSERT_EXPR should never be passed to fold_binary.
Referenced by create_bb(), fold_mult_zconjz(), optimize_stmt(), and rhs_to_tree().
tree fold_build1_initializer_loc  (  location_t  , 
enum  tree_code,  
tree  ,  
tree  
) 
tree fold_build1_stat_loc  (  location_t  , 
enum  tree_code,  
tree  ,  
tree  MEM_STAT_DECL  
) 
tree fold_build2_initializer_loc  (  location_t  , 
enum  tree_code,  
tree  ,  
tree  ,  
tree  
) 
tree fold_build2_stat_loc  (  location_t  , 
enum  tree_code,  
tree  ,  
tree  ,  
tree  MEM_STAT_DECL  
) 
tree fold_build_call_array_initializer_loc  (  location_t  , 
tree  ,  
tree  ,  
int  ,  
tree *  
) 
tree fold_build_call_array_loc  (  location_t  loc, 
tree  type,  
tree  fn,  
int  nargs,  
tree *  argarray  
) 
Fold a CALL_EXPR expression of type TYPE with operands FN and NARGS arguments in ARGARRAY, and a null static chain. Return a folded expression if successful. Otherwise, return a CALL_EXPR of type TYPE from the given operands as constructed by build_call_array.

inlinestatic 
Build and fold a POINTER_PLUS_EXPR at LOC offsetting PTR by OFF.
Referenced by fold_builtin_4(), fold_builtin_exponent(), fold_builtin_n(), and thunk_adjust().

inlinestatic 
Build and fold a POINTER_PLUS_EXPR at LOC offsetting PTR by OFF.
Referenced by fold_builtin_fputs().
tree fold_builtin_call_array  (  location_t  loc, 
tree  type,  
tree  fn,  
int  n,  
tree *  argarray  
) 
Construct a CALL_EXPR with type TYPE with FN as the function expression. N arguments are passed in the array ARGARRAY.
If last argument is __builtin_va_arg_pack (), arguments to this function are not finalized yet. Defer folding until they are.
First try the transformations that don't require consing up an exp.
If we got this far, we need to build an exp.
tree fold_builtin_fputs  (  location_t  loc, 
tree  arg0,  
tree  arg1,  
bool  ignore,  
bool  unlocked,  
tree  len  
) 
Fold a call to the fputs builtin. ARG0 and ARG1 are the arguments to the call. IGNORE is true if the value returned by the builtin will be ignored. UNLOCKED is true if this is actually a call to fputs_unlocked. If LEN is non-NULL, it represents the known length of the string. Return NULL_TREE if no simplification was possible.
If we're using an unlocked function, assume the other unlocked functions exist explicitly.
If the return value is used, don't do the transformation.
Verify the arguments in the original call.
Get the length of the string passed to fputs. If the length can't be determined, punt.
FALLTHROUGH
If optimizing for size keep fputs.
New argument list transforming fputs(string, stream) to fwrite(string, 1, len, stream).
References build_call_expr_loc(), builtin_decl_explicit(), fold_build_pointer_plus_loc(), fold_convert_loc(), host_integerp(), integer_all_onesp(), len, omit_one_operand_loc(), operand_equal_p(), tree_int_cst_lt(), and validate_arg().
Referenced by fold_builtin_classify().
tree fold_builtin_memory_chk  (  location_t  loc, 
tree  fndecl,  
tree  dest,  
tree  src,  
tree  len,  
tree  size,  
tree  maxlen,  
bool  ignore,  
enum built_in_function  fcode  
) 
Fold a call to the __mem{cpy,pcpy,move,set}_chk builtin. DEST, SRC, LEN, and SIZE are the arguments to the call. IGNORE is true if the return value can be ignored. FCODE is the BUILT_IN_* code of the builtin. If MAXLEN is not NULL, it is the maximum length passed as third argument.
If SRC and DEST are the same (and not volatile), return DEST (resp. DEST+LEN for __mempcpy_chk).
If LEN is not constant, try MAXLEN too. For MAXLEN only allow optimizing into non_ocs function if SIZE is >= MAXLEN, never convert to __ocs_fail ().
(void) __mempcpy_chk () can be optimized into (void) __memcpy_chk ().
If __builtin_mem{cpy,pcpy,move,set}_chk is used, assume mem{cpy,pcpy,move,set} is available.
bool fold_builtin_next_arg  (  tree  , 
bool  
) 
tree fold_builtin_snprintf_chk  (  location_t  loc, 
tree  exp,  
tree  maxlen,  
enum built_in_function  fcode  
) 
Fold a call EXP to {,v}snprintf. Return NULL_TREE if a normal call should be emitted rather than expanding the function inline. FCODE is either BUILT_IN_SNPRINTF_CHK or BUILT_IN_VSNPRINTF_CHK. If MAXLEN is not NULL, it is maximum length passed as second argument.
References real_format::p, real_isfinite(), and real_format::round_towards_zero.
Fold function call to builtin strncpy with arguments DEST, SRC, and LEN. If SLEN is not NULL, it represents the length of the source string. Return NULL_TREE if no simplification can be made.
If the LEN parameter is zero, return DEST.
We can't compare slen with len as constants below if len is not a constant.
Now, we must be passed a constant src ptr parameter.
We do not support simplification of this case, though we do support it when expanding trees into RTL.
FIXME: generate a call to __builtin_memset.
OK transform into builtin memcpy.
References build_int_cst(), build_real(), fold_convert_loc(), real_inf(), real_isinf(), real_isneg(), rvc_inf, rvc_nan, rvc_normal, rvc_zero, and validate_arg().
tree fold_builtin_stxcpy_chk  (  location_t  loc, 
tree  fndecl,  
tree  dest,  
tree  src,  
tree  size,  
tree  maxlen,  
bool  ignore,  
enum built_in_function  fcode  
) 
Fold a call to the __st[rp]cpy_chk builtin. DEST, SRC, and SIZE are the arguments to the call. IGNORE is true if return value can be ignored. FCODE is the BUILT_IN_* code of the builtin. If MAXLEN is not NULL, it is maximum length of strings passed as second argument.
If SRC and DEST are the same (and not volatile), return DEST.
If LEN is not constant, try MAXLEN too. For MAXLEN only allow optimizing into non_ocs function if SIZE is >= MAXLEN, never convert to __ocs_fail ().
If return value of __stpcpy_chk is ignored, optimize into __strcpy_chk.
If c_strlen returned something, but not a constant, transform __strcpy_chk into __memcpy_chk.
If __builtin_st{r,p}cpy_chk is used, assume st{r,p}cpy is available.
tree fold_builtin_stxncpy_chk  (  location_t  loc, 
tree  dest,  
tree  src,  
tree  len,  
tree  size,  
tree  maxlen,  
bool  ignore,  
enum built_in_function  fcode  
) 
Fold a call to the __st{r,p}ncpy_chk builtin. DEST, SRC, LEN, and SIZE are the arguments to the call. If MAXLEN is not NULL, it is maximum length passed as third argument. IGNORE is true if return value can be ignored. FCODE is the BUILT_IN_* code of the builtin.
If return value of __stpncpy_chk is ignored, optimize into __strncpy_chk.
If LEN is not constant, try MAXLEN too. For MAXLEN only allow optimizing into non_ocs function if SIZE is >= MAXLEN, never convert to __ocs_fail ().
If __builtin_st{r,p}ncpy_chk is used, assume st{r,p}ncpy is available.
tree fold_call_expr  (  location_t  , 
tree  ,  
bool  
) 
tree fold_convert_loc  (  location_t  , 
tree  ,  
tree  
) 
bool fold_convertible_p  (  const_tree  , 
const_tree  
) 
void fold_defer_overflow_warnings  (  void  ) 
Start deferring overflow warnings. We could use a stack here to permit nested calls, but at present it is not necessary.
References fold_deferring_overflow_warnings.
Referenced by bit_value_binop(), create_bb(), and rhs_to_tree().
bool fold_deferring_overflow_warnings_p  (  void  ) 
Whether we are deferring overflow warnings.
Fold a fma operation with arguments ARG[012].
Referenced by fold_builtin_strcpy().
tree fold_indirect_ref_1  (  location_t  , 
tree  ,  
tree  
) 
tree fold_indirect_ref_loc  (  location_t  , 
tree  
) 
bool fold_real_zero_addition_p  (  const_tree  , 
const_tree  ,  
int  
) 
tree fold_single_bit_test  (  location_t  loc, 
enum tree_code  code,  
tree  arg0,  
tree  arg1,  
tree  result_type  
) 
If CODE with arguments ARG0 and ARG1 represents a single bit equality/inequality test, then return a simplified form of the test using shifts and logical operations. Otherwise return NULL. TYPE is the desired result type.
If this is testing a single bit, we can optimize the test.
First, see if we can fold the single bit test into a signbit test.
Otherwise we have (A & C) != 0 where C is a single bit, convert that into ((A >> C2) & 1). Where C2 = log2(C). Similarly for (A & C) == 0.
If INNER is a right shift of a constant and it plus BITNUM does not overflow, adjust BITNUM and INNER.
If we are going to be able to omit the AND below, we must do our operations as unsigned. If we must use the AND, we have a choice. Normally unsigned is faster, but for some machines signed is.
Put the AND last so it can combine with more things.
Make sure to return the proper type.
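The rewrite described for fold_single_bit_test, turning (A & C) != 0 with C a single bit into ((A >> C2) & 1) with C2 = log2(C), can be illustrated directly (not GCC code; the helper name is invented):

```c
#include <assert.h>

/* Test bit C2 of A via a shift: equivalent to (A & (1u << C2)) != 0
   but avoids materializing the single-bit constant.  */
static int bit_test_via_shift (unsigned a, int c2)
{
  return (a >> c2) & 1;
}
```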
tree fold_ternary_loc  (  location_t  loc, 
enum tree_code  code,  
tree  type,  
tree  op0,  
tree  op1,  
tree  op2  
) 
Fold a ternary expression of code CODE and type TYPE with operands OP0, OP1, and OP2. Return the folded expression if folding is successful. Otherwise, return NULL_TREE.
Strip any conversions that don't change the mode. This is safe for every expression, except for a comparison expression because its signedness is derived from its operands. So, in the latter case, only strip conversions that don't change the signedness. Note that this is done as an internal manipulation within the constant folder, in order to find the simplest representation of the arguments so that their form can be studied. In any cases, the appropriate type conversions should be put back in the tree that will get out of the constant folder.
Pedantic ANSI C says that a conditional expression is never an lvalue, so all simple results must be passed through pedantic_non_lvalue.
Only optimize constant conditions when the selected branch has the same type as the COND_EXPR. This avoids optimizing away "c ? x : throw", where the throw has a void type. Avoid throwing away that operand which contains label.
If we have A op B ? A : C, we may be able to convert this to a simpler expression, depending on the operation and the values of B and C. Signed zeros prevent all of these transformations, for reasons given above each one. Also try swapping the arguments and inverting the conditional.
If the second operand is simpler than the third, swap them since that produces better jump optimization results.
See if this can be inverted. If it can't, possibly because it was a floatingpoint inequality comparison, don't do anything.
Convert A ? 1 : 0 to simply A.
If we try to convert OP0 to our type, the call to fold will try to move the conversion inside a COND, which will recurse. In that case, the COND_EXPR is probably the best choice, so leave it alone.
Convert A ? 0 : 1 to !A. This prefers the use of NOT_EXPR over COND_EXPR in cases such as floating point comparisons.
A < 0 ? <sign bit of A> : 0 is simply (A & <sign bit of A>).
sign_bit_p looks through both zero and sign extensions, but for this optimization only sign extensions are usable.
sign_bit_p only checks ARG1 bits within A's precision. If <sign bit of A> has wider type than A, bits outside of A's precision in <sign bit of A> need to be checked. If they are all 0, this optimization needs to be done in unsigned A's type, if they are all 1 in signed A's type, otherwise this can't be done.
(A >> N) & 1 ? (1 << N) : 0 is simply A & (1 << N). A & 1 was already handled above.
A & N ? N : 0 is simply A & N if N is a power of two. This is probably obsolete because the first operand should be a truth value (that's why we have the two cases above), but let's leave it in until we can confirm this for all frontends.
Disable the transformations below for vectors, since fold_binary_op_with_conditional_arg may undo them immediately, yielding an infinite loop.
Convert A ? B : 0 into A && B if A and B are truth values.
Convert A ? B : 1 into !A  B if A and B are truth values.
Only perform transformation if ARG0 is easily inverted.
Convert A ? 0 : B into !A && B if A and B are truth values.
Only perform transformation if ARG0 is easily inverted.
Convert A ? 1 : B into A  B if A and B are truth values.
CALL_EXPRs used to be ternary exprs. Catch any mistaken uses of fold_ternary on them.
Constructor elements can be subvectors.
We keep an exact subset of the constructor elements.
The bitfield references a single constructor element.
A bitfieldref that referenced the full argument can be stripped.
On constants we can use native encode/interpret to constant fold (nearly) all BIT_FIELD_REFs.
This limitation should not be necessary, we just need to round this up to mode size.
Need bitshifting of the buffer to relax the following.
??? We cannot tell native_encode_expr to start at some random byte only. So limit us to a reasonable amount of work.
For integers we can decompose the FMA if possible.
tree fold_unary_ignore_overflow_loc  (  location_t  loc, 
enum tree_code  code,  
tree  type,  
tree  op0  
) 
If the operation was a conversion do _not_ mark a resulting constant with TREE_OVERFLOW if the original constant was not. These conversions have implementation defined behavior and retaining the TREE_OVERFLOW flag here would confuse later passes such as VRP.
References get_inner_reference().
tree fold_unary_loc  (  location_t  , 
enum  tree_code,  
tree  ,  
tree  
) 
void fold_undefer_and_ignore_overflow_warnings  (  void  ) 
Stop deferring overflow warnings, ignoring any deferred warnings.
void fold_undefer_overflow_warnings  (  bool  , 
const_gimple  ,  
int  
) 
tree force_fit_type_double  (  tree  type, 
double_int  cst,  
int  overflowable,  
bool  overflowed  
) 
We force the double_int CST to the range of the type TYPE by sign or zero extending it. OVERFLOWABLE indicates if we are interested in overflow of the value; when >0 we are only interested in signed overflow, and for <0 we are interested in any overflow. OVERFLOWED indicates whether overflow has already occurred. We force T's value to be within range of T's type (by setting to 0 or 1 all the bits outside the type's range). We set TREE_OVERFLOW if OVERFLOWED is nonzero, or OVERFLOWABLE is >0 and signed overflow occurs, or OVERFLOWABLE is <0 and any overflow occurs. We return a new tree node for the extended double_int. The node is shared if no overflow flags are set.
If we need to set overflow flags, return a new unshared node.
Else build a shared node.
Referenced by native_interpret_vector(), and tree_unary_nonnegative_warnv_p().
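The mask-and-sign-extend step that force_fit_type_double performs on a double_int can be sketched on a plain 64-bit integer. This is an illustrative analogy only, not GCC code; fit_signed and its PREC parameter are invented names:

```c
#include <assert.h>
#include <stdint.h>

/* Force VAL into the range of a hypothetical PREC-bit signed type:
   drop the bits outside the type's range, then sign-extend from
   bit PREC-1 using the (v ^ sign) - sign identity.  */
static int64_t fit_signed (int64_t val, int prec)
{
  uint64_t mask = (prec == 64) ? ~(uint64_t) 0
                               : (((uint64_t) 1 << prec) - 1);
  uint64_t v = (uint64_t) val & mask;      /* bits outside range -> 0 */
  uint64_t sign = (uint64_t) 1 << (prec - 1);
  return (int64_t) ((v ^ sign) - sign);    /* sign-extend bit prec-1 */
}
```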
void free_temp_slots  (  void  ) 
Free all temporaries used so far. This is normally called at the end of generating code for a statement.
References initial_value_struct::entries, has_hard_reg_initial_val(), initial_value_struct::max_entries, and initial_value_struct::num_entries.
Referenced by expand_asm_stmt().

inlinestatic 
Return the next argument if there are more arguments to handle, otherwise return NULL.

inlinestatic 
Return a pointer that holds the next argument if there are more arguments to handle, otherwise return NULL.

inlinestatic 
Initialize the iterator I with arguments from function FNDECL

inlinestatic 
Advance to the next argument.
void generate_setjmp_warnings  (  void  ) 
Generate warning messages for variables live across setjmp.
References get_block_vector(), SDB_DEBUG, and XCOFF_DEBUG.
Referenced by split_live_ranges_for_shrink_wrap().
tree get_attribute_name  (  const_tree  ) 
tree get_attribute_namespace  (  const_tree  ) 
tree get_binfo_at_offset  (  tree  , 
HOST_WIDE_INT  ,  
tree  
) 
tree get_callee_fndecl  (  const_tree  ) 
tree get_containing_scope  (  const_tree  ) 
Given a DECL or TYPE, return the scope in which it was declared, or NULL_TREE if there is no containing scope.
tree get_file_function_name  (  const char *  ) 
tree get_identifier  (  const char *  ) 
Return the (unique) IDENTIFIER_NODE node for a given name. The name is supplied as a char *.
tree get_identifier_with_length  (  const char *  , 
size_t  
) 
Identical to get_identifier, except that the length is assumed known.
tree get_inner_reference  (  tree  exp, 
HOST_WIDE_INT *  pbitsize,  
HOST_WIDE_INT *  pbitpos,  
tree *  poffset,  
enum machine_mode *  pmode,  
int *  punsignedp,  
int *  pvolatilep,  
bool  keep_aligning  
) 
Given an expression EXP that is a handled_component_p, look for the ultimate containing object, which is returned, and specify the access position and size.
Given an expression EXP that may be a COMPONENT_REF, a BIT_FIELD_REF, an ARRAY_REF, or an ARRAY_RANGE_REF, look for nested operations of these codes and find the ultimate containing object, which we return. We set *PBITSIZE to the size in bits that we want, *PBITPOS to the bit position, and *PUNSIGNEDP to the signedness of the field. If the position of the field is variable, we store a tree giving the variable offset (in units) in *POFFSET. This offset is in addition to the bit position. If the position is not variable, we store 0 in *POFFSET. If any of the extraction expressions is volatile, we store 1 in *PVOLATILEP. Otherwise we don't change that. If the field is a non-BLKmode bit-field, *PMODE is set to VOIDmode. Otherwise, it is a mode that can be used to access the field. If the field describes a variable-sized object, *PMODE is set to BLKmode and *PBITSIZE is set to -1. An access cannot be made in this case, but the address of the object can be found. If KEEP_ALIGNING is true and the target is STRICT_ALIGNMENT, we don't look through nodes that serve as markers of a greater alignment than the one that can be deduced from the expression. These nodes make it possible for front ends to prevent temporaries from being created by the middle end on alignment considerations. For that purpose, the normal operating mode at high level is to always pass FALSE so that the ultimate containing object is really returned; moreover, the associated predicate handled_component_p will always return TRUE on these nodes, thus indicating that they are essentially handled by get_inner_reference. TRUE should only be passed when the caller is scanning the expression in order to build another representation and specifically knows how to handle these nodes; as such, this is the normal operating mode in the RTL expanders.
First get the mode, signedness, and size. We do this from just the outermost expression.
Volatile bitfields should be accessed in the mode of the field's type, not the mode computed based on the bit size.
For vector types, with the correct size of access, use the mode of inner type.
Compute cumulative bit offset for nested component-refs and array-refs, and find the ultimate containing object.
If this field hasn't been filled in yet, don't go past it. This should only happen when folding expressions made during type construction.
??? Right now we don't do anything with DECL_OFFSET_ALIGN.
We assume all arrays have sizes that are a multiple of a byte. First subtract the lower bound, if any, in the type of the index, then convert to sizetype and multiply by the size of the array element.
Hand back the decl for MEM[&decl, off].
If any reference in the chain is volatile, the effect is volatile.
If OFFSET is constant, see if we can return the whole thing as a constant bit position. Make sure to handle overflow during this conversion.
Otherwise, split it up.
Avoid returning a negative bitpos as this may wreak havoc later.
TEM is the bitpos rounded to BITS_PER_UNIT towards -Inf. Subtract it from BIT_OFFSET and add it (scaled) to OFFSET.
We can use BLKmode for a byte-aligned BLKmode bitfield.
References emit_move_insn(), and gen_reg_rtx().
Referenced by delegitimize_mem_from_attrs(), fold_unary_ignore_overflow_loc(), invert_truthvalue_loc(), native_interpret_real(), and tree_to_aff_combination().
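For a fixed access such as `o.in.b`, the cumulative bit position and size the walk above produces can be illustrated with ordinary `offsetof` arithmetic. The struct names here are hypothetical; the real get_inner_reference works on tree nodes and also handles array refs, bit-fields, and variable offsets.

```c
#include <stddef.h>

/* Hypothetical structs standing in for a nested COMPONENT_REF chain
   o.in.b.  */
struct inner_s { int a; short b; };
struct outer_s { char pad; struct inner_s in; };

/* Cumulative *PBITPOS and *PBITSIZE for the access o.in.b: each nested
   component contributes its offset within its container.  */
static void inner_ref_bits(size_t *pbitpos, size_t *pbitsize)
{
    *pbitpos = (offsetof(struct outer_s, in)
                + offsetof(struct inner_s, b)) * 8;
    *pbitsize = sizeof(short) * 8;
}
```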
const char* get_name  (  tree  ) 
Return OP or a simpler expression for a narrower value which can be sign-extended or zero-extended to give back OP. Store in *UNSIGNEDP_PTR either 1 if the value should be zero-extended or 0 if the value should be sign-extended.
unsigned int get_object_alignment  (  tree  ) 
bool get_object_alignment_1  (  tree  exp, 
unsigned int *  alignp,  
unsigned HOST_WIDE_INT *  bitposp  
) 
For a memory reference expression EXP compute values M and N such that M divides (&EXP - N) and such that N < M. If these numbers can be determined, store M in *alignp and N in *bitposp and return true. Otherwise return false and store BITS_PER_UNIT to *alignp and any bit offset to *bitposp.
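The (M, N) contract is easy to restate in plain C: N is the bit displacement modulo the alignment, so M divides the remainder. A minimal sketch with a hypothetical helper, assuming the base alignment is already known:

```c
/* Hypothetical helper: given a base alignment ALIGN in bits (a power of
   two, known e.g. from a decl) and a bit displacement OFF from that
   base, produce M = ALIGN and N = OFF mod ALIGN, so that M divides
   (&EXP - N) and N < M as described above.  */
static void object_align_1(unsigned align, unsigned long off,
                           unsigned *alignp, unsigned long *bitposp)
{
    *alignp = align;
    *bitposp = off & (align - 1);   /* N < M by construction */
}
```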
unsigned int get_pointer_alignment  (  tree  ) 
bool get_pointer_alignment_1  (  tree  exp, 
unsigned int *  alignp,  
unsigned HOST_WIDE_INT *  bitposp  
) 
For a pointer-valued expression EXP compute values M and N such that M divides (EXP - N) and such that N < M. If these numbers can be determined, store M in *alignp and N in *bitposp and return true. Return false if the results are just a conservative approximation. If EXP is not a pointer, false is returned too.
We cannot really tell whether this result is an approximation.
Return a version of the TYPE, qualified as indicated by the TYPE_QUALS, if one exists. If no qualified version exists yet, return NULL_TREE.
const char* get_tree_code_name  (  enum  tree_code  ) 
void get_type_static_bounds  (  const_tree  , 
mpz_t  ,  
mpz_t  
) 
Return EXP, stripped of any conversions to wider types in such a way that the result of converting to type FOR_TYPE is the same as if EXP were converted to FOR_TYPE. If FOR_TYPE is 0, it signifies EXP's type.
bool gimple_alloca_call_p  (  const_gimple  ) 
tree gimple_fold_builtin_snprintf_chk  (  gimple  stmt, 
tree  maxlen,  
enum built_in_function  fcode  
) 
Fold a call STMT to {,v}snprintf. Return NULL_TREE if a normal call should be emitted rather than expanding the function inline. FCODE is either BUILT_IN_SNPRINTF_CHK or BUILT_IN_VSNPRINTF_CHK. If MAXLEN is not NULL, it is the maximum length passed as the second argument.
gimple_seq gimplify_parameters  (  void  ) 
Gimplify the parameter list for current_function_decl. This involves evaluating SAVE_EXPRs of variable-sized parameters and generating code to implement callee-copies reference parameters. Returns a sequence of statements to add to the beginning of the function.
Extract the type of PARM; adjust it according to ABI.
Early out for errors and void parameters.
Update info on where next arg arrives in registers.
??? Once upon a time variable_size stuffed parameter list SAVE_EXPRs (amongst others) onto a pending sizes list. This turned out to be less than manageable in the gimple world. Now we have to hunt them down ourselves.
For constant-sized objects, this is trivial; for variable-sized objects, we have to play games.
If PARM was addressable, move that flag over to the local copy, as its address will be taken, not the PARM's. Keep the parm address-taken, as we'll query that flag during gimplification.
The call has been built for a variable-sized object.
Handle a "dllimport" or "dllexport" attribute.
Handle a "dllimport" or "dllexport" attribute; arguments as in struct attribute_spec.handler.
These attributes may apply to structure and union types being created, but otherwise should pass to the declaration involved.
Report error on dllimport ambiguities seen now before they cause any damage.
Honor any targetspecific overrides.
Like MS, treat definition of dllimport'd variables and non-inlined functions on declaration as syntax errors.
`extern' needn't be specified with dllimport. Specify `extern' now and hope for the best. Sigh.
Also, implicitly give dllimport'd variables declared within a function global scope, unless declared static.
An exported function, even if inline, must be emitted.
Report error if symbol is not accessible at global scope.
A dllexport'd entity must have default visibility so that other program units (shared libraries or the main executable) can see it. A dllimport'd entity must have default visibility so that the linker knows that undefined references within this program unit can be resolved by the dynamic linker.

inlinestatic 
Return true if T is an expression that get_inner_reference handles.
Referenced by alias_sets_must_conflict_p(), compute_subscript_distance(), constant_after_peeling(), dump_gimple_statistics(), generic_expr_could_trap_p(), ipa_prop_write_all_agg_replacement(), is_gimple_variable(), loop_has_blocks_with_irreducible_flag(), same_type_for_tbaa(), ssa_prop_fini(), sub_costs(), and verify_phi_args().
unsigned HOST_WIDE_INT highest_pow2_factor  (  const_tree  ) 
int host_integerp  (  const_tree  , 
int  
) 
bool in_array_bounds_p  (  tree  ) 
void indent_to  (  FILE *  , 
int  
) 
void init_attributes  (  void  ) 
Initialize attribute tables, and make some sanity checks if --enable-checking.
Translate NULL pointers to pointers to the empty table.
Make some sanity checks on the attribute tables.
The name must not begin and end with __.
The minimum and maximum lengths must be consistent.
An attribute cannot require both a DECL and a TYPE.
If an attribute requires a function type, in particular it requires a type.
Check that each name occurs just once in each table.
Check that no name occurs in more than one table. Names that begin with '*' are exempt, and may be overridden.
Put all the GNU attributes into the "gnu" namespace.

inlinestatic 
Initialize the abstract argument list iterator object ITER with the arguments from CALL_EXPR node EXP.

inlinestatic 
void init_dummy_function_start  (  void  ) 
Initialize the rtl expansion mechanism so that we can do simple things like generate sequences. This is used to provide a context during global initialization of some passes. You must call expand_dummy_function_end to exit this context.
References current_function_decl, diddle_return_value(), and do_clobber_return_reg().
void init_function_start  (  tree  ) 
void init_inline_once  (  void  ) 
In tree-inline.c.
Initializes weights used by estimate_num_insns.
Estimating time for call is difficult, since we have no idea what the called function does. In the current uses of eni_time_weights, underestimating the cost does less harm than overestimating it, so we choose a rather small value here.
References copy_body_data::block.
void init_object_sizes  (  void  ) 
In tree-object-size.c.
Initialize data structures for the object size computation.
References BUILT_IN_NORMAL, gimple_call_fndecl(), gsi_end_p(), gsi_next(), gsi_start_bb(), gsi_stmt(), and init_object_sizes().
Referenced by init_object_sizes().
void init_temp_slots  (  void  ) 
Initialize temporary slots.
We have not allocated any temporaries yet.
Set up the table to map addresses to temp slots.
Referenced by blocks_nreverse().
void init_tree_optimization_optabs  (  tree  ) 
void init_ttree  (  void  ) 
Init tree.c.
Initialize the hash table of types.
Initialize the tree_contains_struct array.
void initialize_sizetypes  (  void  ) 
Initialize sizetypes so layout_type can use them.
Get sizetypes precision from the SIZE_TYPE target macro.
Create stubs for sizetype and bitsizetype so we can create constants.
Now layout both types manually.
Create the signed variants of *sizetype.
bool initializer_constant_valid_for_bitfield_p  (  tree  ) 
Return true if VALUE is a valid constant-valued expression for use in initializing a static bit-field; one that can be an element of a "constant" initializer.
Return nonzero if VALUE is a valid constant-valued expression for use in initializing a static variable; one that can be an element of a "constant" initializer. Return null_pointer_node if the value is absolute; if it is relocatable, return the variable that determines the relocation. We assume that VALUE has been folded as much as possible; therefore, we do not need to check for such things as arithmetic combinations of integers.
Referenced by insert_float(), and optimize_compound_literals_in_ctor().
bool initializer_zerop  (  const_tree  ) 
Given an initializer INIT, return TRUE if INIT is zero or some aggregate of zeros. Otherwise return FALSE.

inlinestatic 
We set BLOCK_SOURCE_LOCATION only to inlined function entry points.
Referenced by premark_types_used_by_global_vars().
HOST_WIDE_INT int_bit_position  (  const_tree  ) 
HOST_WIDE_INT int_byte_position  (  const_tree  ) 
tree int_const_binop  (  enum  tree_code, 
const_tree  ,  
const_tree  
) 
HOST_WIDE_INT int_cst_value  (  const_tree  ) 
bool int_fits_type_p  (  const_tree  , 
const_tree  
) 
HOST_WIDE_INT int_size_in_bytes  (  const_tree  ) 
int integer_all_onesp  (  const_tree  ) 
integer_all_onesp (tree x) is nonzero if X is an integer constant all of whose significant bits are 1.
int integer_minus_onep  (  const_tree  ) 
integer_minus_onep (tree x) is nonzero if X is an integer constant of value -1.
int integer_nonzerop  (  const_tree  ) 
integer_nonzerop (tree x) is nonzero if X is an integer constant with a nonzero value.
int integer_onep  (  const_tree  ) 
integer_onep (tree x) is nonzero if X is an integer constant of value 1.
int integer_pow2p  (  const_tree  ) 
integer_pow2p (tree x) is nonzero if X is an integer constant with exactly one bit 1.
int integer_zerop  (  const_tree  ) 
integer_zerop (tree x) is nonzero if X is an integer constant of value 0.
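The bit tests behind these predicates can be sketched on raw values. These are hypothetical helpers taking a value plus its type precision instead of a const_tree:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical analogue of integer_all_onesp: every significant bit
   (the low PREC bits, 1 <= PREC <= 64) is 1.  */
static bool all_onesp(uint64_t x, unsigned prec)
{
    uint64_t mask = (prec >= 64) ? ~0ULL : (1ULL << prec) - 1;
    return (x & mask) == mask;
}

/* Hypothetical analogue of integer_pow2p: exactly one bit set.  */
static bool pow2p(uint64_t x)
{
    return x != 0 && (x & (x - 1)) == 0;   /* clears lowest set bit */
}
```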
void internal_reference_types  (  void  ) 
Show that REFERENCE_TYPES are internal and should use address_mode. Called only by front end.
enum tree_code invert_tree_comparison  (  enum  tree_code, 
bool  
) 
tree invert_truthvalue_loc  (  location_t  , 
tree  
) 

inlinestatic 
Given an identifier node IDENT and a string ATTR_NAME, return true if the identifier node is a valid attribute name for the string. ATTR_NAME must be in the form 'text' (not '__text__'). IDENT could be the identifier for 'text' or for '__text__'.
Do the strlen() before calling the outofline implementation. In most cases attr_name is a string constant, and the compiler will optimize the strlen() away.
Referenced by build_fn_decl().
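The matching rule — IDENT may be either 'text' or '__text__' while ATTR_NAME is always the plain form — can be sketched as follows. This is a hypothetical helper on plain strings, not GCC's actual implementation:

```c
#include <string.h>
#include <stdbool.h>

/* Hypothetical sketch: ATTR_NAME is the plain form ("packed"); IDENT
   may be either "packed" or "__packed__".  */
static bool attr_name_matches(const char *attr_name, const char *ident)
{
    size_t len = strlen(ident);
    if (strcmp(attr_name, ident) == 0)
        return true;
    /* Strip a leading and trailing "__" and compare the interior.  */
    if (len > 4 && ident[0] == '_' && ident[1] == '_'
        && ident[len - 2] == '_' && ident[len - 1] == '_')
        return strlen(attr_name) == len - 4
               && strncmp(attr_name, ident + 2, len - 4) == 0;
    return false;
}
```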
bool is_builtin_fn  (  tree  ) 

inlinestatic 
Return true if T (assumed to be a DECL) is a global variable. A variable is considered global if its storage is not automatic.
Referenced by lower_reduction_clauses(), make_constraint_from(), non_rewritable_mem_ref_base(), omp_copy_decl_2(), omp_is_private(), omp_max_vf(), and warn_uninitialized_vars().
bool is_inexpensive_builtin  (  tree  ) 

inlinestatic 
Return true if tree node T is a language-specific node.
bool is_simple_builtin  (  tree  ) 
bool is_tm_ending_fndecl  (  tree  ) 
bool is_tm_may_cancel_outer  (  tree  ) 
bool is_tm_pure  (  const_tree  ) 
bool is_tm_safe  (  const_tree  ) 

inlinestatic 
bool is_typedef_decl  (  tree  x  ) 
hashval_t iterative_hash_expr  (  const_tree  , 
hashval_t  
) 
hashval_t iterative_hash_exprs_commutative  (  const_tree  t1, 
const_tree  t2,  
hashval_t  val  
) 
Generate a hash value for a pair of expressions. This can be used iteratively by passing a previous result as the VAL argument. The same hash value is always returned for a given pair of expressions, regardless of the order in which they are presented. This is useful in hashing the operands of commutative functions.
References chainon(), and nreverse().
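Order-insensitivity can be obtained by sorting the two operand hashes before folding them in, as in this hypothetical sketch (the mixing constant is illustrative, not GCC's):

```c
#include <stdint.h>

/* Fold one value into a running hash (illustrative mixer).  */
static uint32_t hash_combine(uint32_t h, uint32_t v)
{
    return (h ^ v) * 0x9e3779b1u;
}

/* Order-insensitive combination in the spirit described above: fold the
   smaller operand hash in before the larger one, so (a, b) and (b, a)
   always produce the same result.  */
static uint32_t hash_commutative(uint32_t h1, uint32_t h2, uint32_t seed)
{
    uint32_t lo = h1 < h2 ? h1 : h2;
    uint32_t hi = h1 < h2 ? h2 : h1;
    return hash_combine(hash_combine(seed, lo), hi);
}
```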
hashval_t iterative_hash_hashval_t  (  hashval_t  , 
hashval_t  
) 
hashval_t iterative_hash_host_wide_int  (  HOST_WIDE_INT  , 
hashval_t  
) 
void layout_decl  (  tree  , 
unsigned  
) 
Given a VAR_DECL, PARM_DECL, RESULT_DECL or FIELD_DECL node, calculates the DECL_SIZE, DECL_SIZE_UNIT, DECL_ALIGN and DECL_MODE fields. Call this only once for any given decl node. Second argument is the boundary that this field can be assumed to be starting at (in bits). Zero means it can be assumed aligned on any boundary that may be needed.
void layout_type  (  tree  ) 
Given a ..._TYPE node, calculate the TYPE_SIZE, TYPE_SIZE_UNIT, TYPE_ALIGN and TYPE_MODE fields. If called more than once on one node, does nothing except for the first time.
tree lhd_gcc_personality  (  void  ) 
Return the GCC personality function decl.
bool list_equal_p  (  const_tree  , 
const_tree  
) 
int list_length  (  const_tree  ) 
Returns the length of a chain of nodes (number of chain pointers to follow before reaching a null pointer).
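The chain walk is the classic singly linked list traversal; a minimal analogue on a hypothetical node type (real tree nodes chain through TREE_CHAIN):

```c
#include <stddef.h>

/* Hypothetical node with a "chain" pointer, standing in for TREE_CHAIN.  */
struct node { struct node *chain; };

/* Count chain pointers followed before reaching NULL.  */
static int chain_length(const struct node *p)
{
    int n = 0;
    for (; p; p = p->chain)
        n++;
    return n;
}
```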

inlinestatic 
Given an attribute name ATTR_NAME and a list of attributes LIST, return a pointer to the attribute's list element if the attribute is part of the list, or NULL_TREE if not found. If the attribute appears more than once, this only returns the first occurrence; the TREE_CHAIN of the return value should be passed back in if further occurrences are wanted. ATTR_NAME must be in the form 'text' (not '__text__').
In most cases, list is NULL_TREE.
Do the strlen() before calling the outofline implementation. In most cases attr_name is a string constant, and the compiler will optimize the strlen() away.
Referenced by apply_return_prediction(), build_decl_attribute_variant(), build_fn_decl(), cgraph_create_empty_node(), decl_attributes(), dump_possible_polymorphic_call_targets(), iterative_hash_hashval_t(), move_insn_for_shrink_wrap(), output_constructor_bitfield(), process_common_attributes(), simple_cst_equal(), stmt_overflow_infinity(), and varpool_output_variables().

read 
In attribs.c.

read 
tree make_accum_type  (  int  , 
int  ,  
int  
) 
void make_decl_rtl  (  tree  ) 
tree make_fract_type  (  int  , 
int  ,  
int  
) 
Construct various nodes representing fract or accum data types.
Lowest level primitive for allocating a node. The TREE_CODE is the only argument. Contents are initialized to zero except for a few of the common fields.
Given EXP, a logical expression, set the range it is testing into variables denoted by PIN_P, PLOW, and PHIGH. Return the expression actually being tested. *PLOW and *PHIGH will be made of the same type as the returned expression. If EXP is not a comparison, we will most likely not be returning a useful value and range. Set *STRICT_OVERFLOW_P to true if the return value is only valid because signed overflow is undefined; otherwise, do not change *STRICT_OVERFLOW_P.
Start with simply saying "EXP != 0" and then look at the code of EXP and see if we can refine the range. Some of the cases below may not happen, but it doesn't seem worth worrying about this. We "continue" the outer loop when we've changed something; otherwise we "break" the switch, which will "break" the while.
If EXP is a constant, we can evaluate whether this is true or false.
tree make_range_step  (  location_t  loc, 
enum tree_code  code,  
tree  arg0,  
tree  arg1,  
tree  exp_type,  
tree *  p_low,  
tree *  p_high,  
int *  p_in_p,  
bool *  strict_overflow_p  
) 
Helper routine for make_range. Perform one step for it, return new expression if the loop should continue or NULL_TREE if it should stop.
We can only do something if the range is testing for zero.
We can only do something if the range is testing for zero and if the second operand is an integer constant. Note that saying something is "in" the range we make is done by complementing IN_P since it will set in the initial case of being not equal to zero; "out" is leaving it alone.
If this is an unsigned comparison, we also know that EXP is greater than or equal to zero. We base the range tests we make on that fact, so we record it here so we can parse existing range tests. We test arg0_type since often the return type of, e.g. EQ_EXPR, is boolean.
If the high bound is missing, but we have a nonzero low bound, reverse the range so it goes from zero to the low bound minus 1.
If flag_wrapv and ARG0_TYPE is signed, make sure low and high are non-NULL, then normalize will DTRT.
-(x) IN [a,b] -> x in [-b, -a]
~ X -> -X - 1
If flag_wrapv and ARG0_TYPE is signed, then we cannot move a constant to the other side.
If EXP is signed, any overflow in the computation is undefined, so we don't worry about it so long as our computations on the bounds don't overflow. For unsigned, overflow is defined and this is exactly the right thing.
Check for an unsigned range which has wrapped around the maximum value thus making n_high < n_low, and normalize it.
If the range is of the form +/- [ x+1, x ], we won't be able to normalize it. But then, it represents the whole range or the empty set, so make it +/- [ -, - ].
If we're converting arg0 from an unsigned type, to exp, a signed type, we will be doing the comparison as unsigned. The tests above have already verified that LOW and HIGH are both positive. So we have to ensure that we will handle large unsigned values the same way that the current signed bounds treat negative values.
For fixed-point modes, we need to pass the saturating flag as the 2nd parameter.
A range without an upper bound is, naturally, unbounded. Since convert would have cropped a very large value, use the max value for the destination type.
If the low bound is specified, "and" the range with the range for which the original unsigned value will be positive.
Otherwise, "or" the range with the range of the input that will be interpreted as negative.
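The "complement IN_P instead of swapping bounds" idea — e.g. X >= C becomes *not* in [0, C-1] rather than in [C, MAX] — can be sketched for unsigned comparisons. This is a hypothetical single-step reduction, not the real multi-step loop:

```c
#include <stdbool.h>

/* Hypothetical single-step reduction of an unsigned comparison X <op> C
   (C > 0) to an (in_p, low, high) range.  ">=" is expressed by
   complementing IN_P on the "<" range instead of swapping bounds.  */
static void range_step(char op, unsigned c, bool *in_p,
                       unsigned *low, unsigned *high)
{
    switch (op) {
    case '<':                       /* X < C: in [0, C-1] */
        *in_p = true;  *low = 0; *high = c - 1; break;
    case 'g':                       /* X >= C: not in [0, C-1] */
        *in_p = false; *low = 0; *high = c - 1; break;
    case '=':                       /* X == C: in [C, C] */
        *in_p = true;  *low = c; *high = c; break;
    }
}
```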
tree make_signed_type  (  int  ) 
Construct various nodes representing data types.
From expmed.c. Since rtl.h is included after tree.h, we can't put the prototype here. Rtl.h does declare the prototype if tree.h had been included.
tree make_tree_binfo_stat  (  unsigned  MEM_STAT_DECL  ) 
Make a BINFO.
tree make_tree_vec_stat  (  int  MEM_STAT_DECL  ) 
Make a TREE_VEC.
tree make_unsigned_type  (  int  ) 
tree make_vector_stat  (  unsigned  MEM_STAT_DECL  ) 
void mark_addressable  (  tree  ) 
void mark_decl_referenced  (  tree  ) 
void mark_referenced  (  tree  ) 
Referenced by assemble_external_real().
tree mathfn_built_in  (  tree  , 
enum built_in_function  fn  
) 
HOST_WIDE_INT max_int_size_in_bytes  (  const_tree  ) 

inlinestatic 
Return true if VAR may be aliased. A variable is considered as maybe aliased if it has its address taken by the local TU or possibly by another TU and might be modified through a pointer.
bool may_negate_without_overflow_p  (  const_tree  ) 
tree maybe_get_identifier  (  const char *  ) 
If an identifier with the name TEXT (a nullterminated string) has previously been referred to, return that node; otherwise return NULL_TREE.
double_int mem_ref_offset  (  const_tree  ) 
Given two Windows decl attributes lists, possibly including dllimport, return a list of their union.
bool merge_ranges  (  int *  pin_p, 
tree *  plow,  
tree *  phigh,  
int  in0_p,  
tree  low0,  
tree  high0,  
int  in1_p,  
tree  low1,  
tree  high1  
) 
Given two ranges, see if we can merge them into one. Return 1 if we can, 0 if we can't. Set the output range into the specified parameters.
Make range 0 be the range that starts first, or ends last if they start at the same value. Swap them if it isn't.
Now flag two cases, whether the ranges are disjoint or whether the second range is totally subsumed in the first. Note that the tests below are simplified by the ones above.
We now have four cases, depending on whether we are including or excluding the two ranges.
If they don't overlap, the result is false. If the second range is a subset it is the result. Otherwise, the range is from the start of the second to the end of the first.
If they don't overlap, the result is the first range. If they are equal, the result is false. If the second range is a subset of the first, and the ranges begin at the same place, we go from just after the end of the second range to the end of the first. If the second range is not a subset of the first, or if it is a subset and both ranges end at the same place, the range starts at the start of the first range and ends just before the second range. Otherwise, we can't describe this as a single range.
We are in the weird situation where high0 > high1 but high1 has no successor. Punt.
low0 < low1 but low1 has no predecessor. Punt.
If they don't overlap, the result is the second range. If the second is a subset of the first, the result is false. Otherwise, the range starts just after the first range and ends at the end of the second.
high1 > high0 but high0 has no successor. Punt.
The case where we are excluding both ranges. Here the complex case is if they don't overlap. In that case, the only time we have a range is if they are adjacent. If the second is a subset of the first, the result is the first. Otherwise, the range to exclude starts at the beginning of the first range and ends at the end of the second.
Canonicalize - [min, x] into - [-, x].
FALLTHROUGH
Canonicalize - [x, max] into - [x, -].
FALLTHROUGH
The ranges might be also adjacent between the maximum and minimum values of the given type. For - [{min,-}, x] and - [y, {max,-}] ranges where x + 1 < y return + [x + 1, y - 1].
References fold_convert_loc(), negate_expr(), pedantic_non_lvalue_loc(), signed_type_for(), and tcc_comparison.
Referenced by optimize_range_tests_diff(), and sign_bit_p().
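The first case above (both ranges included) reduces to an interval intersection; a hypothetical sketch assuming range 0 has already been ordered to start first:

```c
#include <stdbool.h>

/* Hypothetical sketch of the "both ranges included" case: with range 0
   starting first (low0 <= low1), the merged test is the intersection
   [low1, min(high0, high1)], which is empty (the test is false) when
   the ranges don't overlap.  */
static bool merge_in_in(long low0, long high0, long low1, long high1,
                        long *plow, long *phigh)
{
    (void)low0;                             /* only orders the ranges */
    if (high0 < low1)
        return false;                       /* disjoint: test is false */
    *plow = low1;                           /* start of the second */
    *phigh = high0 < high1 ? high0 : high1; /* whichever ends first */
    return true;
}
```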
enum machine_mode mode_for_size_tree  (  const_tree  , 
enum  mode_class,  
int  
) 
Return the mode for data of a given size SIZE and mode class CLASS. If LIMIT is nonzero, then don't use modes bigger than MAX_FIXED_MODE_SIZE. The value is BLKmode if no other mode is found. This is like mode_for_size, but is passed a tree.

inlinestatic 
Test whether there are more arguments in abstract argument list iterator ITER, without changing its state.

inlinestatic 
int multiple_of_p  (  tree  , 
const_tree  ,  
const_tree  
) 
bool must_pass_in_stack_var_size  (  enum machine_mode  mode, 
const_tree  type  
) 
Nonzero if we do not know how to pass TYPE solely in registers.
If the type has variable size...
If the type is marked as addressable (it is required to be constructed into the stack)...
bool must_pass_in_stack_var_size_or_pad  (  enum  machine_mode, 
const_tree  
) 
int native_encode_expr  (  const_tree  , 
unsigned char *  ,  
int  
) 
Convert between trees and native memory representation.
bool needs_to_live_in_memory  (  const_tree  ) 

inlinestatic 
Return the next argument from abstract argument list iterator object ITER, and advance its state. Return NULL_TREE if there are no more arguments.
Referenced by delete_unreachable_blocks_update_callgraph().

inlinestatic 
tree non_lvalue_loc  (  location_t  , 
tree  
) 

inline 
These checks have to be special cased.
void normalize_rli  (  record_layout_info  ) 
void notice_global_symbol  (  tree  ) 
Referenced by decide_function_section().
tree num_ending_zeros  (  const_tree  ) 
tree omit_one_operand_loc  (  location_t  , 
tree  ,  
tree  ,  
tree  
) 
Return a tree for the case when the result of an expression is RESULT converted to TYPE and OMITTED1 and OMITTED2 were previously operands of the expression but are now not needed. If OMITTED1 or OMITTED2 has side effects, they must be evaluated. If both OMITTED1 and OMITTED2 have side effects, OMITTED1 is evaluated before OMITTED2. Otherwise, if neither has side effects, just do the conversion of RESULT to TYPE.
Referenced by build_call_expr().
void omp_clause_check_failed  (  const_tree  node, 
const char *  file,  
int  line,  
const char *  function,  
enum omp_clause_code  code  
) 
Similar to tree_check_failed but applied to OMP_CLAUSE codes.

inline 
void omp_clause_operand_check_failed  (  int  idx, 
const_tree  t,  
const char *  file,  
int  line,  
const char *  function  
) 
Similar to above, except that the check is for the number of operands of an OMP_CLAUSE node.
References targetm.

inline 
void omp_clause_range_check_failed  (  const_tree  node, 
const char *  file,  
int  line,  
const char *  function,  
enum omp_clause_code  c1,  
enum omp_clause_code  c2  
) 
Similar to tree_range_check_failed but applied to OMP_CLAUSE codes.

inline 
void omp_remove_redundant_declare_simd_attrs  (  tree  ) 
Remove redundant "omp declare simd" attributes from fndecl.
int operand_equal_for_phi_arg_p  (  const_tree  , 
const_tree  
) 
int operand_equal_p  (  const_tree  , 
const_tree  ,  
unsigned  int  
) 
bool parse_input_constraint  (  const char **  constraint_p, 
int  input_num,  
int  ninputs,  
int  noutputs,  
int  ninout,  
const char *const *  constraints,  
bool *  allows_mem,  
bool *  allows_reg  
) 
Similar, but for input constraints.
Assume the constraint doesn't allow the use of either a register or memory.
Make sure constraint has neither `=', `+', nor '&'.
Whether or not a numeric constraint allows a register is decided by the matching constraint, and so there is no need to do anything special with them. We must handle them in the default case, so that we don't unnecessarily force operands to memory.
Try and find the real constraint for this dup. Only do this if the matching constraint is the only alternative.
??? At the end of the loop, we will skip the first part of the matched constraint. This assumes not only that the other constraint is an output constraint, but also that the '=' or '+' come first.
Anticipate increment at end of loop.
Fall through.
Otherwise we can't assume anything about the nature of the constraint except that it isn't purely registers. Treat it like "g" and hope for the best.
References error().
bool parse_output_constraint  (  const char **  constraint_p, 
int  operand_num,  
int  ninputs,  
int  noutputs,  
bool *  allows_mem,  
bool *  allows_reg,  
bool *  is_inout  
) 
Parse the output constraint pointed to by *CONSTRAINT_P. It is the OPERAND_NUMth output operand, indexed from zero. There are NINPUTS inputs and NOUTPUTS outputs to this extended asm. Upon return, *ALLOWS_MEM will be TRUE iff the constraint allows the use of a memory operand. Similarly, *ALLOWS_REG will be TRUE iff the constraint allows the use of a register operand. And, *IS_INOUT will be true if the operand is read-write, i.e., if it is used as an input as well as an output. If *CONSTRAINT_P is not in canonical form, it will be made canonical. (Note that `+' will be replaced with `=' as part of this process.) Returns TRUE if all went well; FALSE if an error occurred.
Assume the constraint doesn't allow the use of either a register or memory.
Allow the `=' or `+' to not be at the beginning of the string, since it wasn't explicitly documented that way, and there is a large body of code that puts it last. Swap the character to the front, so as not to uglify any place else.
If the string doesn't contain an `=', issue an error message.
If the constraint begins with `+', then the operand is both read from and written to.
Canonicalize the output constraint so that it begins with `='.
Make a copy of the constraint.
Swap the first character and the `=' or `+'.
Make sure the first character is an `='. (Until we do this, it might be a `+'.)
Replace the constraint with the canonicalized string.
Loop through the constraint string.
??? Before flow, auto inc/dec insns are not supposed to exist, excepting those that expand_call created. So match memory and hope.
Otherwise we can't assume anything about the nature of the constraint except that it isn't purely registers. Treat it like "g" and hope for the best.
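The canonicalization step — find '=' or '+' anywhere in the string, swap it to the front, and record read-write status — can be sketched as follows. This is a hypothetical helper on a writable buffer; the real parser also validates the remaining constraint letters:

```c
#include <string.h>
#include <stdbool.h>

/* Hypothetical sketch of output-constraint canonicalization: locate the
   '=' or '+' marker, swap it with the first character, and rewrite the
   front as '=' while reporting read-write status.  Returns false when
   neither marker is present (an error for an output constraint).  */
static bool canonicalize_output_constraint(char *buf, bool *is_inout)
{
    char *p = strpbrk(buf, "=+");
    if (!p)
        return false;
    *is_inout = (*p == '+');
    *p = buf[0];                /* swap the marker to the front */
    buf[0] = '=';               /* canonical form begins with '=' */
    return true;
}
```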
void phi_node_elt_check_failed  (  int  , 
int  ,  
const char *  ,  
int  ,  
const char *  
) 
void place_field  (  record_layout_info  , 
tree  
) 
void pop_function_context  (  void  ) 
Restore the last saved context, at the end of a nested function. This function is called from language-specific code.
Reset variables that have known state during rtx generation.
void pop_temp_slots  (  void  ) 
Pop a temporary nesting level. All slots in use in the current level are freed.
Split the bit position POS into a byte offset *POFFSET and a bit position *PBITPOS with the byte offset aligned to OFF_ALIGN bits.
References normalize_offset().
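The split satisfies *POFFSET * BITS_PER_UNIT + *PBITPOS == POS, with the byte offset aligned to OFF_ALIGN bits; a hypothetical sketch:

```c
#define BITS_PER_UNIT 8

/* Hypothetical sketch of the split described above: break bit position
   POS into a byte offset *POFFSET and residual bit position *PBITPOS,
   with the byte offset aligned to OFF_ALIGN bits (OFF_ALIGN a power of
   two and a multiple of BITS_PER_UNIT).  */
static void split_bit_position(unsigned long pos, unsigned off_align,
                               unsigned long *poffset,
                               unsigned long *pbitpos)
{
    *pbitpos = pos & (off_align - 1);
    *poffset = (pos & ~(unsigned long)(off_align - 1)) / BITS_PER_UNIT;
    /* Invariant: *poffset * BITS_PER_UNIT + *pbitpos == pos.  */
}
```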
void preserve_temp_slots  (  rtx  ) 
void print_node  (  FILE *  , 
const char *  ,  
tree  ,  
int  
) 
void print_node_brief  (  FILE *  , 
const char *  ,  
const_tree  ,  
int  
) 
void print_rtl  (  FILE *  , 
const_rtx  
) 
In print-rtl.c
bool private_is_attribute_p  (  const char *  , 
size_t  ,  
const_tree  
) 
This function is a private implementation detail of is_attribute_p() and you should never call it directly.
This function is a private implementation detail of lookup_attribute() and you should never call it directly.
void process_pending_assemble_externals  (  void  ) 
void protected_set_expr_location  (  tree  , 
location_t  
) 
bool prototype_p  (  tree  ) 
bool ptr_difference_const  (  tree  , 
tree  ,  
HOST_WIDE_INT *  
) 

inlinestatic 
Return whether TYPE is a type suitable for an offset for a POINTER_PLUS_EXPR.
tree purpose_member  (  const_tree  , 
tree  
) 
void push_function_context  (  void  ) 
Save the current context for compilation of a nested function. This is called from language-specific code.
References current_function_decl, function::decl, generating_concat_p, set_cfun(), and virtuals_instantiated.
void push_struct_function  (  tree  fndecl  ) 
void push_temp_slots  (  void  ) 
Push deeper into the nesting level for stack temporaries.
bool range_in_array_bounds_p  (  tree  ) 
int real_minus_onep  (  const_tree  ) 
int real_onep  (  const_tree  ) 
int real_twop  (  const_tree  ) 
int real_zerop  (  const_tree  ) 
Return 1 if EXPR is the real constant zero.
int really_constant_p  (  const_tree  ) 
In tree.c
void recompute_tree_invariant_for_addr_expr  (  tree  ) 
void relayout_decl  (  tree  ) 
Given a VAR_DECL, PARM_DECL or RESULT_DECL, clears the results of a previous call to layout_decl and calls it again.
Remove any instances of attribute ATTR_NAME in LIST and return the modified list. ATTR_NAME must be in the form 'text' (not '__text__').
void resolve_unique_section  (  tree  decl, 
int  reloc,  
int  flag_function_or_data_sections  
) 
If required, set DECL_SECTION_NAME to a unique name.
References get_named_section(), and text_section.
tree rli_size_so_far  (  record_layout_info  ) 
tree rli_size_unit_so_far  (  record_layout_info  ) 
tree round_down_loc  (  location_t  , 
tree  ,  
int  
) 
tree round_up_loc  (  location_t  , 
tree  ,  
int  
) 
save_expr (EXP) returns an expression equivalent to EXP but it can be used multiple times within context CTX and only evaluate EXP once.
void save_vtable_map_decl  (  tree  ) 
In vtable-verify.c.

inlinestatic 
Set explicit builtin function nodes and whether it is an implicit function.

inlinestatic 
Set the implicit flag for a builtin function.
void set_builtin_user_assembler_name  (  tree  decl, 
const char *  asmspec  
) 
void set_call_expr_flags  (  tree  , 
int  
) 
void set_min_and_max_values_for_integral_type  (  tree  type, 
int  precision,  
bool  is_unsigned  
) 
In stor-layout.c
TYPE is an integral type, i.e., an INTEGER_TYPE, ENUMERAL_TYPE or BOOLEAN_TYPE. Set TYPE_MIN_VALUE and TYPE_MAX_VALUE for TYPE, based on the PRECISION and whether or not the TYPE IS_UNSIGNED. PRECISION need not correspond to a width supported natively by the hardware; for example, on a machine with 8-bit, 16-bit, and 32-bit register modes, PRECISION might be 7, 23, or 61.
For bit-fields with zero width we end up creating integer types with zero precision. Don't assign any minimum/maximum values to those types; they don't have any valid value.
References bit_field_mode_iterator::next_mode().
Referenced by make_unsigned_type().
void set_user_assembler_name  (  tree  , 
const char *  
) 
Referenced by gen_int_to_fp_nondecimal_conv_libfunc().
int setjmp_call_p  (  const_tree  ) 
int simple_cst_equal  (  const_tree  , 
const_tree  
) 
int simple_cst_list_equal  (  const_tree  , 
const_tree  
) 
tree size_binop_loc  (  location_t  , 
enum  tree_code,  
tree  ,  
tree  
) 
tree size_diffop_loc  (  location_t  , 
tree  ,  
tree  
) 
tree size_in_bytes  (  const_tree  ) 
tree size_int_kind  (  HOST_WIDE_INT  , 
enum  size_type_kind  
) 
HOST_WIDE_INT size_low_cst  (  const_tree  ) 
Look inside EXPR into any simple arithmetic operations. Return the outermost non-arithmetic or non-invariant node.
Look inside EXPR into simple arithmetic operations involving constants. Return the outermost non-arithmetic or non-constant node.
bool ssa_name_nonnegative_p  (  const_tree  ) 
In tree-vrp.c
stabilize_reference (EXP) returns a reference equivalent to EXP but it can be used multiple times and only evaluate the subexpressions once.
Subroutine of stabilize_reference; this is called for subtrees of references. Any expression with side effects must be put in a SAVE_EXPR to ensure that it is only evaluated once.
void stack_protect_epilogue  (  void  ) 
Allow the target to compare Y with X without leaking either into a register.
FALLTHRU
The noreturn predictor has been moved to the tree level. The RTL-level predictors estimate this branch about 20%, which isn't enough to get things moved out of line. Since this is the only extant case of adding a noreturn function at the RTL level, it doesn't seem worth doing aught except adding the prediction by hand.
void stack_protect_prologue  (  void  ) 
Allow the target to copy from Y to X without leaking Y into a register.
Otherwise do a straight move.
record_layout_info start_record_layout  (  tree  ) 
staticp (tree x) is nonzero if X is a reference to data allocated at a fixed address in memory. Returns the outermost data.
bool stdarg_p  (  const_tree  ) 
const_tree strip_invariant_refs  (  const_tree  ) 

inlinestatic 
Compare and hash for any structure which begins with a canonical pointer. Assumes all pointers are interchangeable, which is sort of already assumed by gcc elsewhere IIRC.

inlinestatic 
bool subrange_type_for_debug_p  (  const_tree  , 
tree *  ,  
tree *  
) 
Given a tree EXP, a FIELD_DECL F, and a replacement value R, return a tree with all occurrences of references to F in a PLACEHOLDER_EXPR replaced by R. Also handle VAR_DECLs and CONST_DECLs. Note that we assume here that EXP contains only arithmetic expressions or CALL_EXPRs with PLACEHOLDER_EXPRs occurring only in their argument list.
Similar, but look for a PLACEHOLDER_EXPR in EXP and find a replacement for it within OBJ, a tree that is an object or a chain of references.
int supports_one_only  (  void  ) 
Returns 1 if the target configuration supports defining public symbols so that one of them will be chosen at link time instead of generating a multiply-defined symbol error, whether through the use of weak symbols or a target-specific mechanism for having duplicates discarded.
enum tree_code swap_tree_comparison  (  enum  tree_code  ) 
void tm_malloc_replacement  (  tree  ) 
bool tree_binary_nonnegative_warnv_p  (  enum tree_code  code, 
tree  type,  
tree  op0,  
tree  op1,  
bool *  strict_overflow_p  
) 
Return true if (CODE OP0 OP1) is known to be nonnegative. If the return value is based on the assumption that signed overflow is undefined, set *STRICT_OVERFLOW_P to true; otherwise, don't change *STRICT_OVERFLOW_P.
zero_extend(x) + zero_extend(y) is nonnegative if x and y are both unsigned and at least 2 bits shorter than the result.
x * x is always nonnegative for floating point x or without overflow.
zero_extend(x) * zero_extend(y) is nonnegative if x and y are both unsigned and their total bits is shorter than the result.
We don't know sign of `t', so be conservative and return false.
bool tree_binary_nonzero_warnv_p  (  enum tree_code  code, 
tree  type,  
tree  op0,  
tree  op1,  
bool *  strict_overflow_p  
) 
Return true when (CODE OP0 OP1) is an address and is known to be nonzero. For floating point we further ensure that T is not denormal. Similar logic is present in nonzero_address in rtlanal.h. If the return value is based on the assumption that signed overflow is undefined, set *STRICT_OVERFLOW_P to true; otherwise, don't change *STRICT_OVERFLOW_P.
With the presence of negative values it is hard to say something.
One of operands must be positive and the other nonnegative.
We don't set *STRICT_OVERFLOW_P here: even if this value overflows, on a two's-complement machine the sum of two nonnegative numbers can never be zero.
When both operands are nonzero, then MAX must be too.
MAX where operand 0 is positive is positive.
MAX where operand 1 is positive is positive.
bool tree_call_nonnegative_warnv_p  (  tree  type, 
tree  fndecl,  
tree  arg0,  
tree  arg1,  
bool *  strict_overflow_p  
) 
Return true if T is known to be nonnegative. If the return value is based on the assumption that signed overflow is undefined, set *STRICT_OVERFLOW_P to true; otherwise, don't change *STRICT_OVERFLOW_P.
Always true.
sqrt(0.0) is 0.0.
True if the 1st argument is nonnegative.
True if the 1st OR 2nd arguments are nonnegative.
True if the 1st AND 2nd arguments are nonnegative.
True if the 2nd argument is nonnegative.
True if the 1st argument is nonnegative or the second argument is an even integer.
True if the 1st argument is nonnegative or the second argument is an even integer valued real.
Referenced by symbolic_range_p().

inline 

inline 

inline 

inline 

inline 

inline 
References tree_check_failed(), and tree_operand_check_failed().
void tree_check_failed  (  const_tree  node, 
const char *  file,  
int  line,  
const char *  function,  
...  
) 
Complain that the tree code of NODE does not match the expected 0-terminated list of trailing codes. The trailing code list can be empty, for a more vague error message. FILE, LINE, and FUNCTION are of the caller.
References build_variant_type_copy(), and targetm.
Referenced by tree_check5().

inline 
void tree_class_check_failed  (  const_tree  node, 
const enum tree_code_class  cl,  
const char *  file,  
int  line,  
const char *  function  
) 
Similar to tree_check_failed, except that we check for a class of tree code, given in CL.
References build_function_type(), builtin_decl_explicit_p(), and local_define_builtin().
size_t tree_code_size  (  enum  tree_code  ) 
Compute the number of bytes occupied by a tree with code CODE. This function cannot be used for TREE_VEC codes, which are of variable length.
Make a new TREE_LIST node from specified PURPOSE, VALUE and CHAIN.
void tree_contains_struct_check_failed  (  const_tree  node, 
const enum tree_node_structure_enum  en,  
const char *  file,  
int  line,  
const char *  function  
) 
Similar to tree_class_check_failed, except that we check for whether CODE contains the tree structure identified by EN.
unsigned int tree_decl_map_hash  (  const void *  ) 
bool tree_expr_nonnegative_p  (  tree  ) 
bool tree_expr_nonnegative_warnv_p  (  tree  , 
bool *  
) 
bool tree_expr_nonzero_p  (  tree  ) 
bool tree_expr_nonzero_warnv_p  (  tree  , 
bool *  
) 
tree tree_expr_size  (  const_tree  ) 
int tree_floor_log2  (  const_tree  ) 
int tree_int_cst_compare  (  const_tree  , 
const_tree  
) 
int tree_int_cst_equal  (  const_tree  , 
const_tree  
) 
int tree_int_cst_lt  (  const_tree  , 
const_tree  
) 
unsigned int tree_int_cst_min_precision  (  tree  , 
bool  
) 
int tree_int_cst_sgn  (  const_tree  ) 
int tree_int_cst_sign_bit  (  const_tree  ) 
bool tree_invalid_nonnegative_warnv_p  (  tree  t, 
bool *  strict_overflow_p  
) 
int tree_log2  (  const_tree  ) 
HOST_WIDE_INT tree_low_cst  (  const_tree  , 
int  
) 
int tree_map_base_eq  (  const void *  , 
const void *  
) 
In tree.c.
unsigned int tree_map_base_hash  (  const void *  ) 
int tree_map_base_marked_p  (  const void *  ) 
unsigned int tree_map_hash  (  const void *  ) 
enum tree_node_structure_enum tree_node_structure  (  const_tree  ) 
Return which tree structure is used by T.
location_t tree_nonartificial_location  (  tree  ) 

inline 

inline 

inline 
References tree_operand_check_failed().

inline 
References tree_operand_check_failed().

inline 
void tree_not_check_failed  (  const_tree  node, 
const char *  file,  
int  line,  
const char *  function,  
...  
) 
Complain that the tree code of NODE does match the expected 0 terminated list of trailing codes. FILE, LINE, and FUNCTION are of the caller.
void tree_not_class_check_failed  (  const_tree  node, 
const enum tree_code_class  cl,  
const char *  file,  
int  line,  
const char *  function  
) 
Similar to tree_check_failed, except that we check that a tree does not have the specified class, given in CL.

inline 
Special checks for TREE_OPERANDs.
References build5_stat().

inline 
void tree_operand_check_failed  (  int  idx, 
const_tree  exp,  
const char *  file,  
int  line,  
const char *  function  
) 
Similar to above, except that the check is for the bounds of the operand vector of an expression node EXP.
Referenced by tree_check5(), tree_not_check3(), and tree_not_check4().

inlinestatic 
Compute the number of operands in an expression node NODE. For tcc_vl_exp nodes like CALL_EXPRs, this is stored in the node itself, otherwise it is looked up from the node's code.
References build4_stat().
varasm.c
Referenced by gimplify_init_ctor_eval().
tree tree_overlaps_hard_reg_set  (  tree  , 
HARD_REG_SET *  
) 
Silly ifdef to avoid having all includers depend on hard-reg-set.h.

inline 
void tree_range_check_failed  (  const_tree  node, 
const char *  file,  
int  line,  
const char *  function,  
enum tree_code  c1,  
enum tree_code  c2  
) 
Similar to tree_check_failed, except that instead of specifying a dozen codes, use the knowledge that they're all sequential.
References build_function_type_list(), builtin_decl_explicit_p(), and local_define_builtin().
bool tree_single_nonnegative_warnv_p  (  tree  t, 
bool *  strict_overflow_p  
) 
bool tree_single_nonzero_warnv_p  (  tree  , 
bool *  
) 
size_t tree_size  (  const_tree  ) 
Compute the number of bytes occupied by 'node'. This routine only looks at TREE_CODE and, if the code is TREE_VEC, TREE_VEC_LENGTH.
bool tree_swap_operands_p  (  const_tree  , 
const_tree  ,  
bool  
) 

inlinestatic 
Constructs double_int from tree CST.
Referenced by add_loc_list(), backtrace_base_for_ref(), build_function_type_list_1(), build_method_type_directly(), cgraph_create_function_alias(), create_mul_ssa_cand(), finish_builtin_struct(), merge_comps(), native_interpret_vector(), quad_int_pair_sort(), tree_to_aff_combination(), tree_unary_nonnegative_warnv_p(), and vrp_int_const_binop().
bool tree_unary_nonnegative_warnv_p  (  enum tree_code  code, 
tree  type,  
tree  op0,  
bool *  strict_overflow_p  
) 
Return true if (CODE OP0) is known to be nonnegative. If the return value is based on the assumption that signed overflow is undefined, set *STRICT_OVERFLOW_P to true; otherwise, don't change *STRICT_OVERFLOW_P.
We can't return 1 if flag_wrapv is set because ABS_EXPR<INT_MIN> = INT_MIN.
We don't know sign of `t', so be conservative and return false.
References build_fixed(), build_real(), fixed_arithmetic(), FIXED_VALUE_TYPE, force_fit_type_double(), double_int::neg_with_overflow(), real_value_negate(), and tree_to_double_int().
Referenced by range_is_null().
bool tree_unary_nonzero_warnv_p  (  enum tree_code  code, 
tree  type,  
tree  op0,  
bool *  strict_overflow_p  
) 
Return true when (CODE OP0) is an address and is known to be nonzero. For floating point we further ensure that T is not denormal. Similar logic is present in nonzero_address in rtlanal.h. If the return value is based on the assumption that signed overflow is undefined, set *STRICT_OVERFLOW_P to true; otherwise, don't change *STRICT_OVERFLOW_P.

inline 
void tree_vec_elt_check_failed  (  int  idx, 
int  len,  
const char *  file,  
int  line,  
const char *  function  
) 
Similar to above, except that the check is for the bounds of a TREE_VEC's (dynamically sized) vector.

inlinestatic 
Return nonzero if CODE is a tree code that represents a truth value.
References builtin_info.
bool type_contains_placeholder_p  (  tree  ) 
Return true if any part of the structure of TYPE involves a PLACEHOLDER_EXPR directly. This includes size, bounds, qualifiers (for QUAL_UNION_TYPE) and field positions.
Given a hash code and a ..._TYPE node (for which the hash code was made), return a canonicalized ..._TYPE node, so that duplicates are not made. How the hash code is computed is up to the caller, as long as any two callers that could hash identical-looking type nodes agree.
bool type_in_anonymous_namespace_p  (  tree  ) 
int type_list_equal  (  const_tree  , 
const_tree  
) 
int type_num_arguments  (  const_tree  ) 
bool typedef_variant_p  (  tree  ) 
tree uniform_vector_p  (  const_tree  ) 
Given a vector VEC, return its first element if all elements are the same. Otherwise return NULL_TREE.
unsigned int update_alignment_for_field  (  record_layout_info  rli, 
tree  field,  
unsigned int  known_align  
) 
FIELD is about to be added to RLI->T. The alignment (in bits) of the next available location within the record is given by KNOWN_ALIGN. Update the variable alignment fields in RLI, and return the alignment to give the FIELD.
The alignment required for FIELD.
The type of this field.
True if the field was explicitly aligned by the user.
Do not attempt to align an ERROR_MARK node
Lay out the field so we know what alignment it needs.
Record must have at least as much alignment as any field. Otherwise, the alignment of the field within the record is meaningless.
Here, the alignment of the underlying type of a bit-field can affect the alignment of a record; even a zero-sized field can do this. The alignment should be to the alignment of the type, except that for zero-size bit-fields this only applies if there was an immediately prior, nonzero-size bit-field. (That's the way it is, experimentally.)
Named bit-fields cause the entire structure to have the alignment implied by their type. Some targets also apply the same rules to unnamed bit-fields.
Targets might choose to handle unnamed and hence possibly zero-width bit-fields. Those are not influenced by #pragmas or packed attributes.
The alignment of the record is increased to the maximum of the current alignment, the alignment indicated on the field (i.e., the alignment specified by an __aligned__ attribute), and the alignment indicated by the type of the field.
bool use_register_for_decl  (  const_tree  ) 
void using_eh_for_cleanups  (  void  ) 
This routine is called from front ends to indicate eh should be used for cleanups.
bool using_eh_for_cleanups_p  (  void  ) 
Query whether EH is used for cleanups.
bool valid_constant_size_p  (  const_tree  ) 
bool validate_arglist  (  const_tree  , 
...  
) 
variable_size (EXP) is like save_expr (EXP) except that it is for the special case of something that is part of a variable size for a data type. It makes special arrangements to compute the value at the right time when the data type belongs to a function parameter.
bool vec_member  (  const_tree  , 
vec< tree, va_gc > *  
) 
enum machine_mode vector_type_mode  (  const_tree  ) 
Vector types need to check target flags to determine type.
bool virtual_method_call_p  (  tree  ) 
tree walk_tree_1  (  tree *  tp, 
walk_tree_fn  func,  
void *  data,  
struct pointer_set_t *  pset,  
walk_tree_lh  lh  
) 
In tree-inline.c
Apply FUNC to all the subtrees of TP in a preorder traversal. FUNC is called with the DATA and the address of each subtree. If FUNC returns a non-NULL value, the traversal is stopped, and the value returned by FUNC is returned. If PSET is non-NULL it is used to record the nodes visited, and to avoid visiting a node more than once.
Skip empty subtrees.
Don't walk the same tree twice, if the user has requested that we avoid doing so.
Call the function.
If we found something, return it.
Even if we didn't, FUNC may have decided that there was nothing interesting below this point in the tree.
But we still need to check our siblings.
None of these have subtrees other than those already walked above.
Walk all elements but the first.
Now walk the first one as a tail call.
Walk the DECL_INITIAL and DECL_SIZE. We don't want to walk into declarations that are just mentioned, rather than declared; they don't really belong to this part of the tree. And, we can see cycles: the initializer for a declaration can refer to the declaration itself.
FALLTHRU
TARGET_EXPRs are peculiar: operands 1 and 3 can be the same. But, we only want to walk once.
If this is a TYPE_DECL, walk into the fields of the type that it's defining. We only want to walk into these fields of a type in this case and not in the general case of a mere reference to the type. The criterion is as follows: if the field can be an expression, it must be walked only here. This should be in keeping with the fields that are directly gimplified in gimplify_type_sizes in order for the mark/copy-if-shared/unmark machinery of the gimplifier to work with variable-sized types. Note that DECLs get walked as part of processing the BIND_EXPR.
Call the function for the type. See if it returns anything or doesn't want us to continue. If we are to continue, walk both the normal fields and those for the declaration case.
But do not walk a pointedto type since it may itself need to be walked in the declaration case if it isn't anonymous.
If this is a record type, also walk the fields.
We'd like to look at the type of the field, but we can easily get infinite recursion. So assume it's pointed to elsewhere in the tree. Also, ignore things that aren't fields.
Same for scalar types.
FALLTHRU
Walk over all the subtrees of this operand.
Go through the subtrees. We need to do this in forward order so that the scope of a FOR_EXPR is handled properly.
If this is a type, walk the needed fields in the type.
We didn't find what we were looking for.
tree walk_tree_without_duplicates_1  (  tree *  tp, 
walk_tree_fn  func,  
void *  data,  
walk_tree_lh  lh  
) 
Like walk_tree, but does not walk duplicate nodes more than once.
References types_same_for_odr().
HOST_WIDEST_INT widest_int_cst_value  (  const_tree  ) 
int folding_initializer 
In fold-const.c
Nonzero if we are folding constants inside an initializer; zero otherwise.
@verbatim
Fold a constant sub-tree into a single node for C compiler. Copyright (C) 1987-2013 Free Software Foundation, Inc.
This file is part of GCC.
GCC is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3, or (at your option) any later version.
GCC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with GCC; see the file COPYING3. If not see http://www.gnu.org/licenses/.
The entry points in this file are fold, size_int_wide and size_binop. fold takes a tree as argument and returns a simplified tree. size_binop takes a tree code for an arithmetic operation and two operands that are trees, and produces a tree for the result, assuming the type comes from `sizetype'. size_int takes an integer value, and creates a tree constant with type from `sizetype'. Note: Since the folders get called on nongimple code as well as gimple code, we need to handle GIMPLE tuples as well as their corresponding tree equivalents.
bool force_folding_builtin_constant_p 
In builtins.c
Nonzero if __builtin_constant_p should be folded right away.