Source: http://www.ibm.com/think/topics/llm-parameters

What Are LLM Parameters? | IBM

By Ivan Belcic and Cole Stryker
