Source: http://www.ibm.com/think/topics/machine-learning-pipeline

What Is a Machine Learning Pipeline? | IBM

By Ivan Belcic and Cole Stryker

Related resources

- Ebook: Data science and MLOps for data leaders. Align with other leaders on the three key goals of MLOps and trustworthy AI: trust in data, trust in models and trust in processes.
- Podcast: MLOps explained. Techsplainers by IBM breaks down the essentials of MLOps, from key concepts to real-world use cases. Clear, quick episodes help you learn the fundamentals fast.
- AI models: Explore IBM Granite. IBM® Granite® is our family of open, performant and trusted AI models, tailored for business and optimized to scale your AI applications. Explore language, code, time series and guardrail options.
- Ebook: Unlock the power of generative AI and ML. Learn how to incorporate generative AI, machine learning and foundation models into your business operations for improved performance.
- Ebook: How to choose the right foundation model. Learn how to select the most suitable AI foundation model for your use case.
- Explainer: What is machine learning? Machine learning is a branch of AI and computer science that focuses on using data and algorithms to enable AI to imitate the way that humans learn.
- Guide: How to thrive in this new era of AI with trust and confidence. Dive into the three critical elements of a strong AI strategy: creating a competitive edge, scaling AI across the business and advancing trustworthy AI.
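The page links to pipeline stages such as data transformation, feature selection, model training and hyperparameter tuning. A minimal sketch of how those stages chain together, using scikit-learn's Pipeline API (scikit-learn is one of the tools the page mentions); the dataset, estimator choices and parameter grid here are illustrative assumptions, not taken from the article:

```python
# A minimal supervised-learning pipeline sketch in scikit-learn,
# chaining data transformation, feature selection and model training.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest
from sklearn.linear_model import LogisticRegression

# Illustrative dataset; a real pipeline would start from cleaned,
# integrated data produced by earlier data-processing stages.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),                   # data transformation
    ("select", SelectKBest(k=10)),                 # feature selection
    ("model", LogisticRegression(max_iter=1000)),  # model training
])

# Tuning hyperparameters over the whole pipeline helps avoid data leakage:
# each cross-validation fold re-fits the scaler and selector on its
# training portion only, never on held-out data.
param_grid = {"select__k": [5, 10, 20], "model__C": [0.1, 1.0, 10.0]}
search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X_train, y_train)
print(search.best_params_)
print(round(search.score(X_test, y_test), 3))
```

Wrapping every stage in a single Pipeline object also means the fitted preprocessing travels with the model, which is what makes the trained artifact straightforward to deploy and monitor later in the lifecycle.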
