GCC Middle and Back End API Reference
tree-ssa-loop-manip.h File Reference

Go to the source code of this file.

Typedefs

typedef void(* transform_callback )(struct loop *, void *)

Functions

void create_iv (tree, tree, tree, struct loop *, gimple_stmt_iterator *, bool, tree *, tree *)
void rewrite_into_loop_closed_ssa (bitmap, unsigned)
void verify_loop_closed_ssa (bool)
basic_block split_loop_exit_edge (edge)
basic_block ip_end_pos (struct loop *)
basic_block ip_normal_pos (struct loop *)
void standard_iv_increment_position (struct loop *, gimple_stmt_iterator *, bool *)
bool gimple_duplicate_loop_to_header_edge (struct loop *, edge, unsigned int, sbitmap, edge, vec< edge > *, int)
bool can_unroll_loop_p (struct loop *loop, unsigned factor, struct tree_niter_desc *niter)
void tree_transform_and_unroll_loop (struct loop *, unsigned, edge, struct tree_niter_desc *, transform_callback, void *)
void tree_unroll_loop (struct loop *, unsigned, edge, struct tree_niter_desc *)
tree canonicalize_loop_ivs (struct loop *, tree *, bool)

Typedef Documentation

typedef void(* transform_callback)(struct loop *, void *)

Function Documentation

bool can_unroll_loop_p (struct loop *loop, unsigned factor, struct tree_niter_desc *niter)
   Returns true if we can unroll LOOP FACTOR times.  The number
   of iterations of the loop is returned in NITER.  
     Check whether unrolling is possible.  We only want to unroll loops
     for which we are able to determine the number of iterations.  We also
     want to split the extra iterations of the loop from its end,
     therefore we require that the loop has precisely one exit.  
         Scalar evolutions analysis might have copy propagated
         the abnormal ssa names into these expressions, hence
         emitting the computations based on them during loop
         unrolling might create overlapping life ranges for
         them, and failures in out-of-ssa.  
     And of course, we must be able to duplicate the loop.  
     The final loop should be small enough.  

References affine_iv_d::base, tree_niter_desc::bound, tree_niter_desc::cmp, tree_niter_desc::control, lower_bound_in_type(), affine_iv_d::step, tree_int_cst_sign_bit(), and upper_bound_in_type().
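
As a minimal sketch (not part of the header), this is how a pass might pair
can_unroll_loop_p with tree_unroll_loop (documented below); the helper
single_dom_exit from the number-of-iterations analysis, the wrapper name
try_unroll_loop, and the factor of 4 are illustrative assumptions.

    /* Sketch: unroll LOOP by an assumed factor of 4 if the niter analysis
       allows it.  NITER is filled in by can_unroll_loop_p.  */
    static bool
    try_unroll_loop (struct loop *loop)
    {
      struct tree_niter_desc niter;
      unsigned factor = 4;
      edge exit = single_dom_exit (loop);

      if (!exit || !can_unroll_loop_p (loop, factor, &niter))
        return false;

      tree_unroll_loop (loop, factor, exit, &niter);
      return true;
    }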

tree canonicalize_loop_ivs (struct loop *, tree *, bool)
void create_iv (tree base, tree step, tree var, struct loop *loop, gimple_stmt_iterator *incr_pos, bool after, tree *var_before, tree *var_after)
   Creates an induction variable with value BASE + STEP * iteration in LOOP.
   It is expected that neither BASE nor STEP are shared with other expressions
   (unless the sharing rules allow this).  Use VAR as a base var_decl for it
   (if NULL, a new temporary will be created).  The increment will occur at
   INCR_POS (after it if AFTER is true, before it otherwise).  INCR_POS and
   AFTER can be computed using standard_iv_increment_position.  The ssa versions
   of the variable before and after increment will be stored in VAR_BEFORE and
   VAR_AFTER (unless they are NULL).  
     For easier readability of the created code, produce MINUS_EXPRs
     when suitable.  
     Gimplify the step if necessary.  We put the computations in front of the
     loop (i.e. the step should be loop invariant).  

References add_phi_arg(), create_phi_node(), force_gimple_operand(), gimple_build_assign_with_ops(), gsi_insert_after(), gsi_insert_before(), gsi_insert_seq_on_edge_immediate(), GSI_NEW_STMT, loop::header, loop_latch_edge(), loop_preheader_edge(), make_ssa_name(), make_temp_ssa_name(), mark_addressable(), may_negate_without_overflow_p(), tree_expr_nonnegative_warnv_p(), and tree_int_cst_lt().
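
A hedged sketch of the intended call sequence: create a counter that starts at
0 and steps by 1 at the standard increment position.  The wrapper name
add_counter_iv and the choice of unsigned_type_node are illustrative
assumptions, not part of the API.

    /* Sketch: build the IV 0, 1, 2, ... in LOOP; the SSA names before and
       after the increment are returned in CTR_BEFORE and CTR_AFTER.  */
    static void
    add_counter_iv (struct loop *loop, tree *ctr_before, tree *ctr_after)
    {
      gimple_stmt_iterator incr_pos;
      bool insert_after;

      standard_iv_increment_position (loop, &incr_pos, &insert_after);
      create_iv (build_int_cst (unsigned_type_node, 0),
                 build_int_cst (unsigned_type_node, 1),
                 NULL_TREE, loop, &incr_pos, insert_after,
                 ctr_before, ctr_after);
    }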

bool gimple_duplicate_loop_to_header_edge (struct loop *loop, edge e, unsigned int ndupl, sbitmap wont_exit, edge orig, vec< edge > *to_remove, int flags)
   The same as cfgloopmanip.c:duplicate_loop_to_header_edge, but also
   updates the PHI nodes at start of the copied region.  In order to
   achieve this, only loops whose exits all lead to the same location
   are handled.

   Notice that we do not completely update the SSA web after
   duplication.  The caller is responsible for calling update_ssa
   after the loop has been duplicated.  
     ???  This forces needless update_ssa calls after processing each
     loop instead of just once after processing all loops.  We should
     instead verify that loop-closed SSA form is up-to-date for LOOP
     only (and possibly SSA form).  For now just skip verifying if
     there are to-be-renamed variables.  
     Re-add the removed PHI arguments for E.  
     Copy the PHI node arguments.  
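
A rough sketch of a single call, assuming the caller wants to peel one copy of
the body onto the preheader edge, uses the cfgloopmanip.h flag
DLTHE_FLAG_UPDATE_FREQ, and (as noted above) finishes the SSA update itself;
the wrapper name peel_one_iteration is hypothetical.

    /* Sketch: duplicate LOOP's body once onto its preheader edge.  Because
       WONT_EXIT is cleared, no exit edges are removed and TO_REMOVE stays
       empty; a real caller would remove any collected edges via remove_path
       before updating SSA.  */
    static bool
    peel_one_iteration (struct loop *loop)
    {
      sbitmap wont_exit = sbitmap_alloc (2);
      vec<edge> to_remove = vNULL;
      bool ok;

      bitmap_clear (wont_exit);
      ok = gimple_duplicate_loop_to_header_edge (loop, loop_preheader_edge (loop),
                                                 1, wont_exit, NULL, &to_remove,
                                                 DLTHE_FLAG_UPDATE_FREQ);
      sbitmap_free (wont_exit);
      to_remove.release ();

      if (ok)
        update_ssa (TODO_update_ssa);
      return ok;
    }
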
basic_block ip_end_pos (struct loop *)
basic_block ip_normal_pos (struct loop *)
void rewrite_into_loop_closed_ssa (bitmap, unsigned)
basic_block split_loop_exit_edge (edge)
void standard_iv_increment_position (struct loop *loop, gimple_stmt_iterator *bsi, bool *insert_after)
   Stores the standard position for the induction variable increment in LOOP
   (just before the exit condition if it is available and the latch block is
   empty, otherwise at the end of the latch block) in BSI.  INSERT_AFTER is set
   to true if the increment should be inserted after *BSI.  

References cfun, LOOP_CLOSED_SSA, LOOPS_HAVE_PREHEADERS, LOOPS_HAVE_SIMPLE_LATCHES, loops_state_satisfies_p(), and need_ssa_update_p().
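
A short sketch of how the returned position is typically consumed, assuming an
already-built increment statement INCR_STMT; the wrapper name is hypothetical.

    /* Sketch: place INCR_STMT at the standard increment position of LOOP,
       honouring the INSERT_AFTER flag.  */
    static void
    insert_at_standard_position (struct loop *loop, gimple incr_stmt)
    {
      gimple_stmt_iterator bsi;
      bool insert_after;

      standard_iv_increment_position (loop, &bsi, &insert_after);
      if (insert_after)
        gsi_insert_after (&bsi, incr_stmt, GSI_NEW_STMT);
      else
        gsi_insert_before (&bsi, incr_stmt, GSI_NEW_STMT);
    }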

void tree_transform_and_unroll_loop (struct loop *, unsigned, edge, struct tree_niter_desc *, transform_callback, void *)
     Let us assume that the unrolled loop is quite likely to be entered.  
     The values for scales should keep profile consistent, and somewhat close
     to correct.

     TODO: The current value of SCALE_REST makes it appear that the loop that
     is created by splitting the remaining iterations of the unrolled loop is
     executed the same number of times as the original loop, and with the same
     frequencies, which is obviously wrong.  This does not appear to cause
     problems, so we do not bother with fixing it for now.  To make the profile
     correct, we would need to change the probability of the exit edge of the
     loop, and recompute the distribution of frequencies in its body because
     of this change (scale the frequencies of blocks before and after the exit
     by appropriate factors).  
     Determine the probability of the exit edge of the unrolled loop.  
     Without profile feedback, loops for which we do not know a better estimate
     are assumed to roll 10 times.  When we unroll such a loop, it appears to
     roll too few times, and it may even seem to be cold.  To avoid this, we
     ensure that the created loop appears to roll at least 5 times (but at
     most as many times as before unrolling).  
     Prepare the cfg and update the phi nodes.  Move the loop exit to the
     loop latch (and make its condition dummy, for the moment).  
     Since the exit edge will be removed, the frequency of all the blocks
     in the loop that are dominated by it must be scaled by
     1 / (1 - exit->probability).  
     Set the probability of new exit to the same of the old one.  Fix
     the frequency of the latch block, by scaling it back by
     1 - exit->probability.  
         Prefer using original variable as a base for the new ssa name.
         This is necessary for virtual ops, and useful in order to avoid
         losing debug info for real ops.  
     Transform the loop.  
     Unroll the loop and remove the exits in all iterations except for the
     last one.  
     Ensure that the frequencies in the loop match the new estimated
     number of iterations, and change the probability of the new
     exit edge.  
     Finally create the new counter for number of iterations and add the new
     exit instruction.  
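
A minimal sketch of driving this function through the transform_callback
typedef above; the no-op callback, the factor of 4, and the wrapper name are
illustrative assumptions.  EXIT and DESC are assumed to come from the same
analysis as in can_unroll_loop_p.

    /* Sketch: unroll LOOP by 4 with a callback that performs no extra
       transformation on the loop body.  */
    static void
    noop_transform (struct loop *loop ATTRIBUTE_UNUSED, void *data ATTRIBUTE_UNUSED)
    {
    }

    static void
    unroll_by_four (struct loop *loop, edge exit, struct tree_niter_desc *desc)
    {
      tree_transform_and_unroll_loop (loop, 4, exit, desc, noop_transform, NULL);
    }

In practice, the tree_unroll_loop wrapper documented next covers this
no-transform case.
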
void tree_unroll_loop (struct loop *loop, unsigned factor, edge exit, struct tree_niter_desc *desc)
   Wrapper over tree_transform_and_unroll_loop for the case when we do not
   want to transform the loop before unrolling.  The meaning of the arguments
   is the same as for tree_transform_and_unroll_loop.  

Referenced by insn_to_prefetch_ratio_too_small_p().

void verify_loop_closed_ssa (bool)