Embodied Question Answering (EQA) requires agents to autonomously explore and comprehend an environment in order to answer context-dependent questions. A typical EQA framework consists of four components: a planner, a memory module, a stopping module, and an answering module. However, existing methods use the memory module inefficiently, as the information it stores is leveraged solely by the answering module. Such a design can result in redundant or inadequate exploration and thus a suboptimal success rate. To address this problem, we propose MemoryEQA, a memory-centric EQA framework that establishes mechanisms for memory storage, update, and retrieval, allowing memory information to contribute throughout the entire exploration process. Specifically, we convert each observation into a structured textual representation and store it in a vector library under a fixed schema. At each exploration step, a viewpoint comparison strategy determines whether the memory requires updating. Before executing each module, an entropy-based adaptive retrieval strategy obtains the minimal yet sufficient memory information that satisfies that module's requirements. The retrieved module-specific information is then integrated with the current observation as input to the corresponding module. To evaluate the memory capabilities of EQA models, we construct MT-HM3D, a benchmark based on HM3D comprising 1,587 question-answer pairs that involve multiple targets across different regions and require agents to retain information about targets acquired during exploration. Experimental results on HM-EQA, MT-HM3D, and OpenEQA demonstrate the effectiveness of our framework; in particular, a 9.9% performance gain on MT-HM3D over baseline models underscores the pivotal role of memory in solving complex tasks.
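
To make the memory-centric design concrete, the following is a minimal, illustrative sketch of how structured textual memory entries might be stored in a vector library and retrieved per module. The field names, the `embed()` placeholder, and the top-k similarity retrieval are assumptions for exposition only; they are not MemoryEQA's actual implementation, which additionally includes viewpoint-based updating and entropy-based adaptive retrieval.

```python
# Illustrative sketch only: a structured memory entry plus a simple vector-library
# store/retrieve loop. All names here are hypothetical stand-ins.
from dataclasses import dataclass
import numpy as np

@dataclass
class MemoryEntry:
    step: int      # exploration step at which the observation was taken
    region: str    # e.g. "kitchen"
    caption: str   # structured textual description of the observation
    pose: tuple    # agent (x, y, yaw) when the observation was captured

def embed(text: str, dim: int = 128) -> np.ndarray:
    """Placeholder text embedder; a real system would use a learned encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

class MemoryBank:
    """Minimal vector library: stores entries alongside their embeddings."""
    def __init__(self):
        self.entries, self.vectors = [], []

    def store(self, entry: MemoryEntry) -> None:
        self.entries.append(entry)
        self.vectors.append(embed(entry.caption))

    def retrieve(self, query: str, k: int) -> list:
        """Return the k entries whose captions are most similar to the query."""
        if not self.entries:
            return []
        sims = np.stack(self.vectors) @ embed(query)   # cosine similarity (unit vectors)
        top = np.argsort(-sims)[:k]
        return [self.entries[i] for i in top]

# Usage: each module issues its own query and receives only the memory it needs.
bank = MemoryBank()
bank.store(MemoryEntry(3, "kitchen", "a red mug on the counter", (1.2, 0.4, 90.0)))
bank.store(MemoryEntry(7, "bedroom", "a laptop on the desk", (4.0, 2.1, 180.0)))
print(bank.retrieve("where is the mug?", k=1))
```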