In this paper, we introduce a general class of time discounting, allowing for present bias or future bias, into repeated games with perfect monitoring. A strategy profile is called an agent subgame perfect equilibrium if no player has a profitable one-shot deviation at any history. We study strongly symmetric agent subgame perfect equilibria of repeated games with a symmetric stage game. We find that the worst punishment equilibrium takes different forms for different types of bias. When players are future-biased or have quasi-hyperbolic discounting, the worst punishment payoff can be achieved by a version of stick-and-carrot strategies. When players are present-biased, the worst punishment path may fluctuate over time forever. We also find that the stage-game minmax payoff does not serve as a tight lower bound for the limit equilibrium payoff set: the worst punishment payoff can lie below the minmax payoff under future bias and above it under present bias, even when players are very patient. Lastly, for a given intertemporal bias structure defined in continuous time, we compare the effect of making players interact more frequently with that of making them more patient.