# DAY 43: Random Functions and the Broadcasting Mechanism

Topics covered:

1. Generating random tensors with `torch.randn`

2. Output-size formulas for convolution and pooling (optional to memorize; frameworks compute shapes automatically)

3. PyTorch's broadcasting mechanism for addition and multiplication

Note: NumPy has an essentially identical broadcasting mechanism.

Homework: work through a few more examples of your own to solidify these concepts.
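The three points above can be illustrated in one small, self-contained sketch: `torch.randn` for random tensors, the standard convolution/pooling output-size formula, and broadcasting for addition and multiplication. The `conv_out_size` helper is my own illustration of the formula, not a PyTorch API.

```python
import torch

# 1. Random tensor from a standard normal, shape (batch, channels, H, W)
x = torch.randn(2, 3, 32, 32)

# 2. Conv/pool output-size formula: out = (in + 2*padding - kernel) // stride + 1
def conv_out_size(in_size, kernel, stride=1, padding=0):
    return (in_size + 2 * padding - kernel) // stride + 1

# A 3x3 conv with padding=1 preserves 32x32; a 2x2 max-pool with stride 2 halves it
print(conv_out_size(32, kernel=3, stride=1, padding=1))  # 32
print(conv_out_size(32, kernel=2, stride=2, padding=0))  # 16

# 3. Broadcasting: a (C, 1, 1) tensor is stretched across the H and W dimensions
bias = torch.randn(3, 1, 1)
print((x + bias).shape)  # torch.Size([2, 3, 32, 32])
print((x * bias).shape)  # torch.Size([2, 3, 32, 32])
```

Broadcasting aligns shapes from the trailing dimension: `(3, 1, 1)` is treated as `(1, 3, 1, 1)` and expanded to match `(2, 3, 32, 32)`, which is exactly how per-channel scaling works in attention modules.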

# CBAM Attention

Topics covered:

1. Review of the channel attention module

2. The spatial attention module

3. Definition of CBAM

Homework: count the parameters of today's model, and inspect the training process with TensorBoard.
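The spatial-attention step in point 2 can be sketched on its own: pool over the channel dimension with both mean and max, stack the two resulting maps, and pass them through a 7×7 convolution followed by a sigmoid to produce a per-pixel attention weight. This is a minimal standalone illustration, not the full CBAM module from the code below.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Minimal spatial attention: channel-wise mean/max -> 7x7 conv -> sigmoid."""
    def __init__(self, kernel_size=7):
        super().__init__()
        # 2 input channels (mean map + max map), 1 output attention map
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        avg_map = torch.mean(x, dim=1, keepdim=True)    # (N, 1, H, W)
        max_map, _ = torch.max(x, dim=1, keepdim=True)  # (N, 1, H, W)
        att = self.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * att  # broadcasting: attention map scales every channel

x = torch.randn(2, 64, 16, 16)
out = SpatialAttention()(x)
print(out.shape)  # torch.Size([2, 64, 16, 16])
```

Note that the output shape matches the input shape: attention modules only reweight activations, so they can be dropped between any two layers of an existing CNN.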

@浙大疏锦行

```python
# Imports
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt


# 1. Parameter-counting utility: total, trainable, and non-trainable parameters
def count_model_parameters(model):
    total_params = 0
    trainable_params = 0
    non_trainable_params = 0
    for param in model.parameters():
        param_num = param.numel()
        total_params += param_num
        if param.requires_grad:
            trainable_params += param_num
        else:
            non_trainable_params += param_num

    def format_params(num):
        if num >= 1e6:
            return f"{num/1e6:.2f}M"
        elif num >= 1e3:
            return f"{num/1e3:.2f}K"
        else:
            return f"{num}"

    print("Model parameter summary:")
    print(f"Total parameters: {format_params(total_params)} ({total_params:,})")
    print(f"Trainable parameters: {format_params(trainable_params)} ({trainable_params:,})")
    print(f"Non-trainable parameters: {format_params(non_trainable_params)} ({non_trainable_params:,})")
    return total_params, trainable_params, non_trainable_params


# 2. CBAM attention module
class CBAMBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Channel attention
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels)
        )
        # Spatial attention
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3, bias=False),
            nn.Sigmoid()
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # Channel attention
        avg_out = self.fc(self.avg_pool(x).view(x.size(0), -1)).view(x.size(0), x.size(1), 1, 1)
        max_out = self.fc(self.max_pool(x).view(x.size(0), -1)).view(x.size(0), x.size(1), 1, 1)
        channel_att = self.sigmoid(avg_out + max_out)
        x = x * channel_att
        # Spatial attention
        avg_out = torch.mean(x, dim=1, keepdim=True)
        max_out, _ = torch.max(x, dim=1, keepdim=True)
        spatial_att = self.spatial(torch.cat([avg_out, max_out], dim=1))
        x = x * spatial_att
        return x


# 3. CNN model with CBAM blocks
class CBAM_CNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            CBAMBlock(64),
            nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            CBAMBlock(128),
            nn.MaxPool2d(2, 2),
        )
        # Two 2x2 pools reduce CIFAR-10's 32x32 input to 8x8
        self.classifier = nn.Linear(128 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), -1)
        x = self.classifier(x)
        return x


# 4. Training loop with TensorBoard logging
def train(model, train_loader, test_loader, criterion, optimizer, scheduler, device, epochs, writer):
    all_iter_losses = []
    iter_indices = []
    for epoch in range(epochs):
        # Re-enable training mode each epoch (the test phase below switches to eval mode)
        model.train()
        running_loss = 0.0
        correct = 0
        total = 0
        for batch_idx, (data, target) in enumerate(train_loader):
            data, target = data.to(device), target.to(device)
            optimizer.zero_grad()
            output = model(data)
            loss = criterion(output, target)
            loss.backward()
            optimizer.step()

            iter_loss = loss.item()
            global_step = epoch * len(train_loader) + batch_idx + 1
            all_iter_losses.append(iter_loss)
            iter_indices.append(global_step)
            # Batch-level loss for TensorBoard
            writer.add_scalar('Train/Batch_Loss', iter_loss, global_step)

            running_loss += iter_loss
            _, predicted = output.max(1)
            total += target.size(0)
            correct += predicted.eq(target).sum().item()
            if (batch_idx + 1) % 100 == 0:
                print(f'Epoch {epoch+1}/{epochs} | Batch {batch_idx+1}/{len(train_loader)} '
                      f'| Batch loss: {iter_loss:.4f}')

        # Epoch-level training metrics
        epoch_train_loss = running_loss / len(train_loader)
        epoch_train_acc = 100. * correct / total
        writer.add_scalar('Train/Epoch_Loss', epoch_train_loss, epoch + 1)
        writer.add_scalar('Train/Epoch_Accuracy', epoch_train_acc, epoch + 1)

        # Test phase
        model.eval()
        test_loss = 0
        correct_test = 0
        total_test = 0
        with torch.no_grad():
            for data, target in test_loader:
                data, target = data.to(device), target.to(device)
                output = model(data)
                test_loss += criterion(output, target).item()
                _, predicted = output.max(1)
                total_test += target.size(0)
                correct_test += predicted.eq(target).sum().item()

        # Epoch-level test metrics
        epoch_test_loss = test_loss / len(test_loader)
        epoch_test_acc = 100. * correct_test / total_test
        writer.add_scalar('Test/Epoch_Loss', epoch_test_loss, epoch + 1)
        writer.add_scalar('Test/Epoch_Accuracy', epoch_test_acc, epoch + 1)
        scheduler.step(epoch_test_loss)
        print(f'Epoch {epoch+1} done | Train accuracy: {epoch_train_acc:.2f}% '
              f'| Test accuracy: {epoch_test_acc:.2f}%')

    writer.close()
    return epoch_test_acc


# 5. Plotting helper
def plot_iter_losses(losses, indices):
    plt.figure(figsize=(10, 4))
    plt.plot(indices, losses, 'b-', alpha=0.7)
    plt.xlabel('Iteration')
    plt.ylabel('Loss')
    plt.title('Training Loss per Iteration')
    plt.grid(True)
    plt.show()


# 6. Main
if __name__ == "__main__":
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    # CIFAR-10 data loading
    transform = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
    ])
    train_dataset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                                 download=True, transform=transform)
    test_dataset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                                download=True, transform=transform)
    train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True, num_workers=2)
    test_loader = DataLoader(test_dataset, batch_size=64, shuffle=False, num_workers=2)

    # Model, loss, optimizer, scheduler
    model = CBAM_CNN().to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.Adam(model.parameters(), lr=0.001)
    scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min',
                                                     patience=3, factor=0.5)

    # Parameter count
    count_model_parameters(model)

    # TensorBoard writer
    writer = SummaryWriter(log_dir='./cbam_logs')

    # Train
    epochs = 50
    print("Starting training...")
    final_accuracy = train(model, train_loader, test_loader, criterion,
                           optimizer, scheduler, device, epochs, writer)
    print(f"Training complete | Final test accuracy: {final_accuracy:.2f}%")
```